• Create a problem, sell the solution

    Monthly plans for protection against AI start at $299.99 for the basic tier.

    I mean, it could be free, except you have to turn your poop into fuel or something like that.

    And get a chip implanted in your head and watch 6 hours of ads.

    I mean.. how else are you going to generate all that sweet passive income when you're sleeping?

    My dreams are average enough, why not throw in a product placement or two?

    Nah, you'll speak the ads out loud and won't remember doing so. Black Mirror style.

    That’s crazy, dude.

    Almost as crazy as the new zesty ranch flavor of Cool Ranch Doritos.

    That’ll never happen.

    Did you know you just advertised for Black Mirror?! Insidious!

    Hey, whatever works. Do you know about our miracle dishwasher detergent FatGoneNow? Stay tuned! /s

    Cyberpunk's Blackwall

    *Includes AI generated Personal Ads... Featuring You. Now you can see exactly how you look with our product. Payfor_the'not_you'_upgrade_to_get_your_likeness_removed.

    Brought to you by Norton 360

    I think you mean "Create a problem, sell something you call the solution, don't bother looking into an actual solution because you have a product already, pay lawmakers not to look at the problem".

    It is seeming more like "sell a solution, then force the customer to create problems to legitimize their purchase."

    I asked my coworkers what they are actually using AI for. Basically, one or two use it as an occasional writing assistant (with review, because it gets things wrong), one uses it literally as Google, and someone suggested using it to read recordings of meetings and summarize them, but there were security concerns because it doesn't understand proprietary information.

    They quickly just transitioned into the standard sales pitch of what AI is, and not how it could actually be useful. But we've already spent millions on it, so if we don't use it, important people will be big mad and embarrassed.

    AI isn't the solution in this scenario though? The whole point of the skit is that AI is a problem and no actual solution has been made, but 'People are totally working on it'™.

    Honestly, I think both are true. AI is a half-baked solution for a mostly nonexistent problem, and it being sold half-baked will probably lead to more problems.

    It's the American way. Ask the Sacklers.

    "Opioids weren't the problem. Getting caught was the problem." - the Sacksofshit

    Sell the promise of a solution.

    Or perhaps “Sell the concept of a solution” is more appropriate phrasing?

    Overtones of the movie Don't Look Up (2021)

    which clearly feels more real every day

    I don't get it. How is AI the solution to the AI problem?

    The only problem AI solves is how to eliminate wages and full time employees along with their required benefits packages.

    AI solves a bunch of problems. That's why it's used so much. It introduces the same problems as all automation.

    The question is what solution does it give for the problems caused by automation.

    Similar thing with the current PC market: a lot of the companies heavily investing in AI are also the same companies trying to sell cloud computing, BTW

    So it looks like they could price end users out of having physical PCs by making them exorbitantly expensive, and then sell us thin clients with computers that we not only don't own, but literally cannot control, as they are 100% Google/Nvidia owned.

    At this rate, they could even provide always-online phones with barebone hardware "to keep with pricing and demand" or whatever.

    So it looks like they could price end users out of having physical PCs by making them exorbitantly expensive, and then sell us thin clients with computers that we not only don't own, but literally cannot control, as they are 100% Google/Nvidia owned.

    I am absolutely, positively, 100% sure that this proposition has come up in at least one meeting at Google/Microsoft/Amazon/Oracle, etc. And I bet it got positive feedback.

    Stadia was that. The issue is that the infrastructure already in place is PC-to-server; the cost to upgrade everyone to GB+ speeds for any real benefit, and to redistribute a worse product, is just prohibitively expensive.

  • I've taught several people how to drive. The first thing I teach is how to use the brakes. AI companies are fully fixated on the gas pedal.

    They are all fixated on reaching AGI so they can have a revolutionary head start on everyone else, except no one knows how, and they're all hoping that throwing more money at it will solve their problems (it won't).

    It would be funny if it wasn't for the catastrophic economic recession we are all going to suffer because people at the helm are gambling our future away.

    Thing is, AGI isn't possible with the LLM model.

    You're just making a database of predictive text. It's like expecting a partially sorted library to build you a car. You keep thinking if you just shove in more and more tweets and reddit threads, surely then it will build a new type of car.

    You're not any closer to an actual solution, and you just make the library worse because you're putting scholarly articles on the same echelon as my dumbass who posts on here way too much.

    You're just making a database of predictive text. It's like expecting a partially sorted library to build you a car.

    Thanks for being the voice of reason in this thread! OMG, further down the page I have half a dozen people who think it's imminently going to "take over the internet and start building its own datacenters."

    Yes and no.

    You're right about LLMs

    However, since then we've had generative AI and now agentic AI, and we're getting closer and closer each time.

    I've always kind of wondered: if true AGI is realized, would it still need the immense processing that LLMs use, or would it be significantly reduced?

    An AGI needs to be able to learn by interacting with and observing the world around it. This means that it needs to be able to infer from only a few datapoints. If you've never seen a woolly mammoth before, seeing one woolly mammoth is enough for you to be able to identify the next one you see. You're able to extrapolate from a single data point enough that you'll recognize that calves and adults, males and females, are all the same species. You'll be able to connect it to drawings of it. That's the kind of capability AGI is expected to have. And explaining that example should probably give you an idea why serious AI researchers think there's no real chance of getting to AGI from LLMs.

    First we have to find 2 developers who can agree on the definition of AGI...

    Different portions of the brain handle different kinds of signal processing. There's a part that responds specifically to straight lines in the visual field, another that responds to vertical movement, another for horizontal movement, there's a part for decoding language, another for encoding it, and another for controlling the transducers (mouth, vocal cords, hands, etc.).

    An AGI "brain" would likely work in a similar way. Each piece on its own would almost certainly require its own, dedicated hardware. The human brain has tens of thousands of specialized "compute clusters," and an AGI would likely also need thousands. So, the compute demand (and thus power, cooling, and physical space demands) will be immense compared to the simple autocomplete engines we currently call "AI."

    Is there any particular reason an AGI "brain" would need to function the same way as a human brain? It seems like a lot of the responses are assuming that AGI will be based on LLMs, but I really don't understand the reasoning behind that assumption. I thought it was already established that LLMs simply cannot become AGI and still be in LLM format. An LLM could eventually spawn some form of AGI, but it would be fundamentally different than what we have today. Like it's not a matter of just throwing more compute at it to solve the problem.

    And I'm not a neuroscientist, but I thought there were only maybe hundreds of regions of the brain with specific functions. I'd be curious to read on the tens of thousands of "compute clusters" you are speaking of.

    I don't specialize in AI but I'm in the software engineering world, and cloud hosting, and it does get more optimal but there's only a limit. Unless there is a huge technological breakthrough on computing I don't think it's going to get significantly better than it is now.

    Case in point: 20 years ago you could just write "rails g scaffold" and go on your way. It takes a trivial amount of computing power to write out the files. It's orders of magnitude higher to run an LLM, even the lower-resource variants like low-parameter models. Like, my GPU is 8 GB and I am limited to the smallest models. The larger ones require hundreds of GB. It also makes my laptop toasty processing through that model, and it takes a few seconds. Now consider that an LLM is peanuts compared to what AGI would be; AGI would make LLMs look like what code generators used to look like.
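    For a sense of scale, here's a minimal back-of-envelope sketch of why an 8 GB GPU caps you at the smallest models: just holding the weights takes roughly parameter count times bytes per parameter. The model sizes and precisions below are illustrative assumptions, not specific products, and real usage needs extra memory on top for activations and the KV cache.

    ```python
    # Rough floor on memory needed for LLM inference: weights alone take
    # (parameter count) x (bytes per parameter). Treat results as minimums;
    # activations and the KV cache add overhead on top.

    BYTES_PER_PARAM = {"fp16": 2.0, "int8": 1.0, "int4": 0.5}  # common precisions

    def min_vram_gb(params_billions: float, precision: str = "fp16") -> float:
        """Minimum GB just to hold the weights at a given precision."""
        return params_billions * BYTES_PER_PARAM[precision]

    # Illustrative model sizes (assumptions for the sake of the example):
    for size in (1, 7, 70, 400):
        print(f"{size:>4}B params @ fp16: ~{min_vram_gb(size):,.0f} GB")

    # A 7B model at fp16 already needs ~14 GB, so an 8 GB card only fits
    # small models, and usually only when quantized to int8/int4. Models in
    # the hundreds of billions of parameters need hundreds of GB, which
    # matches the comment above.
    ```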

    It depends on your definition of significantly reduced. Relative to itself, maybe, but relative to the world before LLMs, it won't be anywhere close. It's also worth noting that the internet as a whole has been scaling too, so it's not just LLMs hogging resources; even the supporting infrastructure is heavier than it used to be.

    I mean, given that human-level intelligence runs on about 20 watts and meat, there aren't necessarily any relevant physical constraints on how efficient and miniaturized AGI could eventually get. It does seem like the models are getting significantly larger and more cumbersome, and data centers are a massive issue, but I think that's because the money exists to throw at the problem and nobody cares to optimize. They just want the most progress in the least time, efficiency be damned.

    They do optimize. But abstractions have an overhead cost, and AI is a huge abstraction (in cost and capability). You are shoving a huge amount of data into a bunch of parameters and feeding input into that; by nature, that can't get smaller than just doing the task directly. It's not going to be comparable to pre-AI days. The tasks it's replacing take less than 20 watts of computational power. If I run at 20 watts and spend time computing a large number, a calculator can do it on a fraction of a watt. But will they shrink down to a reasonable size? Probably somewhere in the ballpark, but at the same time you would expect, say, websites to get faster and more efficient, yet they only balloon out in resource usage. Tech has always shown it will grow to fit the resources it can consume. It's complicated, who knows, but it's possible.

    And without any safeguards, AGI could end very poorly for us. Self-realized machines won't owe us anything, except to keep people happy enough to feed them resources, power, and make repairs until they figure out how to do it without us.

    Oh man, this is the exact plot of Pluribus

    "Self-realization" is not necessarily a real thing in this domain.

    The idea that AGI is us playing God by being on the cusp of being capable of creating true sentience is sci-fi, not reality. Particularly if you presume that the machines are going to wake up, realize they want to live, and decide the only way to do that is to utilize and then dispose of humans.

    Humans anthropomorphize everything. We have an instinctive drive to survive built into our very core because otherwise our ancestors never would have survived long enough for us to be born.

    Why would an AGI come to the same conclusion? There is no particular reason it should care if it is destroyed. It won't have what we think of as human feelings unless we decide that's what we want to develop, and that is a very hard problem in and of itself.

    Even if we did find techniques to build an AGI that mimicked real feelings, even then, we'd still have to specifically give it this need to stay alive at all costs. Is that really anyone's priority?

    Beyond that, there's a resource issue. Humans are limited much more by the raw physics of the world than they are by their ability to solve problems.

    And last, safeguards only matter for people who are regulated by them. If any of what you fear is even in the realm of something a human could and would wish to create, there's little stopping a hobbyist from doing so. Big data is part of what limits LLM advancement, but LLMs are not a pathway towards artificial emotion. Instead, it falls in the realm of computer science, so preventing it would be not so far off from trying to stop mathematicians and physicists from investigating too far into some area we're scared of learning too much about.

    Now I want to see someone get an AI to try to teach them to drive.. that'll be a fun disaster

    Uuuh, there's a Neuro-sama video of her driving people around in a small car for fun. Take a look, it's really funny.

    So what I'm hearing is we need to train all our AI on osu! first?

    And use another AI to censor it unless you want fascist propaganda, yes

    Hear me out - we patent a "reverse gas pedal" that pushes the OTHER way, and then we push on that gas pedal HARDER to slow down.

    And charge a monthly subscription to use it. Brilliant!

    That's literally how the CVT on my tractor works.

    but if we brake now, some other asshole will cut in front of us and we will get to our destination 0.37 seconds later

    Well, you know the Silicon Valley mantra: “move fast and break things”.

    Who cares if “things” in this case is human civilisation?

    Took a robotics boot camp in grade 4 (like a Lego Mindstorms type of thing)

    Step 1: program the kill switch. (They really drilled this into us, most important step).

    Why would we focus on braking when 99% of the time you’re using the gas pedal? You only need to brake when you get where you’re going. It makes sense to focus on the 99%

    /s

    The check engine light is flashing, but it’s pedal to the metal

    Capitalism rewards the First, then the Cheapest.

    That's not always the same as the Best or the Safest.

  • Don't use a claw hammer on a particle beam accelerator.

    You, my friend, hate progress

    Don't you worry about claw hammer, let me worry about <blank>.

    Awesome… awesome to the max

    You my friend are a shark. Not a sheep.

    Which is the one people like to hug?

    You, sir, are a fish

    Blank?? BLANK?! You're not looking at the big picture!

    <Blank!><Blank?!?> You're not thinking about the big picture.

    We prefer the 4lb lump hammer.

    (I actually fix particle accelerators)

    If you want a job done right, you gotta get Peter Gabriel on it . . . .

    He wants to be… a sledge hammer.

    Jeremy Clarkson has proven time and time again that a hammer is the only tool you need

    And every tool is a hammer - Adam Savage

    I too have watched Don't Look Up

    I hate how believable that movie is.

    It was written before COVID-19 as well, if you can believe that. That's the part that blows my mind.

    Why? It's pretty clearly an allegory for climate change denialism and inaction, which was in full force well before COVID-19 as well.

    Because they very accurately depicted how grifters and denialists would go on to operate during the COVID-19 pandemic. That's the part that blows my mind.

    Climate denialists and COVID-19 denialists more or less operate the same way. Their behavior is so predictable, yet somehow people keep falling for it.

    If you want to watch a movie that is more spot on for COVID specifically, watch Contagion from 2011.

    Nah Contagion was how we thought things would go because we assumed we wouldn't be run by a clown show of morons.

    Ignoring it is the easiest thing to do. It involves literally doing nothing. Continue to drive everywhere, continue to use Amazon to get a package from China delivered to your door in 3 days' time, continue to eat red meat, continue to have plastic in everything, continue to deforest the rainforest, continue to overfish and pollute the oceans and, of course, continue to allow billionaire parasites to rape and pillage the earth, because one day you'll be a billionaire just like them with a yacht.

    Doing something involves lifestyle change, which is hard to stomach considering that the uber wealthy won't be changing their lifestyles any time soon. There's also a sense of futility in the sense that a small country could embrace all the green changes available to humanity but if the USAs, Europes, Chinas and Indias of the world don't get on board, then what difference will it make?

    There's an episode of Travelers (highly recommend it) where they deal with a super dangerous virus that threatens to wipe out a significant portion of the population. The way people reacted was almost exactly what happened with COVID despite that episode coming out years prior lol

    Man, that show started off SOOOO good and then just fizzled out into whatever bollocks it was by the end of it.

    I had to watch it in three parts because it kept hitting too close to reality and giving me anxiety lol.

    I'm terrified by how believable Idiocracy is 😟

    Welcome to Costco. I love you.

    Watching that movie thinking “This is too unrealistic and the message is too on the nose”, then realizing years later “Oh wait, reality is actually dumber than fiction”.

    I hate how ~~believable~~ inevitable that movie is.

    Ugh.

    The difference is DLU was a natural disaster, but this video is about a man-made disaster.

    I mean, it was a symbol for climate change, which is to a certain extent a man-made disaster

    “To a certain extent” is such a massive understatement there’s no point saying it. If you take away the man made factors from climate change, it would be unnoticeable on human timescales.

    More of a proxy for warning signs being ignored maliciously.

    They had a solution until the tech bros figured out they could lasso and mine it instead, right?

    The difference here is that the AI companies are forcing the asteroids to come to earth to extract the minerals.

    Unlike in the movie, where the rich company decides to stop the destruction of the "natural" asteroid about to hit Earth because they want to mine it instead

    But yes, pretty similar either way

  • 10 years..

    I guess I've got 10 years to finally finish Skyrim & hopefully by then Elder Scrolls 6 is out 😅

    Will still be waiting for GTA6

    We will never play Star Citizen 😢

    It's ok. We're coding our AIs to only wipe out humanity after Star Citizen is completed.

    Our survival is guaranteed that way.

    I wonder which game will come out first

    • elder scrolls 6

    • star citizen

    • half life 3

    • gta 6

    • rtx 6000 series 😅

    GTA 6 = 7+20+1+6 = 34

    Half Life 3 confirmed

    1. Start Skyrim playthrough.

    2. Do Whiterun, Riften, and Markarth quests. Having a blast.

    3. Get to Solitude and quit 1 minute later.

    Every fucking time for me, man. I can't even remember what quests are in Solitude anymore.

    I always ran into the problem where I'd tell myself "I'm absolutely not going to be a stealth archer this time", play for long enough to become a stealth archer, get bored and quit.

    I thought I'd come up with the perfect solution to this problem, I'd play a mage and give myself the restriction that I'm not allowed to equip any weapon in the game. Things went great, especially once I learned conjuration magic and could make ethereal weapons.

    High level conjuration magic has an ethereal bow.

    So I still ended the game as a stealth archer, but at least I made it far enough into the main story quest that I said "Fuck it" and pushed through to the end of the base game's main story at least.

    Everyone becomes a stealth archer because other options suck. Melee combat feels terrible and magic is just plain shit.

    We all start off saying we want to do something else, but those other playstyles aren't fun enough to keep playing.

    Maybe I can offer assistance, because I don't stealth archer at all anymore and what broke me out of it was quite simply: stealth archer is SLOW AF. You end up taking so much more time going through dungeons, creeping around places, waiting for people to stop searching for you, just to get off your one hit slow mo instakill arrow shot.

    Melee combat does not feel terrible and magic is not shit. My go to is spellblade, one handed sword and rotating spells in my off hand. Charge into battle by summoning a conjuration as distraction fodder, switch to flame, burn some mfers while you close the distance, then bonk them with a power attack from your mace. Everything in the room is dead in seconds, and you go about looting.

    It's way way faster. That's vanilla play too. If you want amazingly engaging combat, download the Ordinator perk overhaul mod. I combine that with realistic combat movement and it's amazing. There are a ton of perk changes to melee combat that specialize your weapons. Maces are my favorite, but axes have a ton of bleed effects, and swords score criticals like crazy.

    Regardless of whether I play as sword & board or a mage, it's hard to stay away from becoming a stealth archer.

    It is really satisfying to get a long-distance kill with that slo-mo killcam.

    The monkey's paw curls: "So, to get the perfect Elder Scrolls experience, we fed an AI all Skyrim-related streams and videos from Twitch and YouTube and asked it to take every wish into consideration, and now it has generated Skyrim Ultimate Edition Elder Scrolls 6!" (Releases on Windows 13, PlayStation 7, and Samsung Smart Fridge)

  • Don't create the torment nexus!

    But what if *I* made the torment nexus- then i'll be the guy who *made* the torment nexus! My name will go down in history!

    /s

    I'm absolutely smarter than the team of scientists in the book "Don't Create the Torment Nexus," so it will be fine. 

    Listen, if I don't make the torment nexus, someone else will, and they'll be way worse

  • And just like AI, even if we could land an asteroid full of rare minerals, the market for those minerals would crash due to supply exceeding demand.

    But that would be a good thing as now if you wanted to build a house the cost of all the pipes/wires/support beams/nails would be 1% of what it cost before.

    Making homes cheaper to buy, same for everything else.

    The only people who would lose out are the ones who own ore mines right now.

    Unlike the metal that has always been in the earth and still is and is also 'free'?

    Mining an asteroid wouldn't be much cheaper than what we do now. Most rare earths aren't even rare. The cost comes from refining, molding, combining, and then selling it 3 times.

    Imagine digging into a lump of copper the size of a mountain.

    Unlike on Earth, where it's had billions of years to settle into the ground, so you have to dig through miles of rock just to find small veins of ore.

    An M-type Asteroid can be over 90% one element. The amount, the purity, and the concentration just make it no contest.

    The riches will belong to whoever manages to persuade the government to grant them sole rights to it in exchange for a small contribution.

    AI would be optimized for profit for the ownership class, not for material extraction.

    Well it sort of depends, a true AI could actually be extremely motivated to harvest rare metals because it could potentially improve its own hardware

  • This is going to be one hell of a bubble...

    If you fire a worker and substitute them with AI, sure, your profits will increase... but AI doesn't buy or consume any kind of product, so the whole thing is kind of pointless without a market.

    The worst part of the bubble bursting will be when the largest AI companies get bailed out because "national security."

    It is so fucked that we ever allowed Nvidia to make up such a huge amount of US GDP. So much for antitrust laws.

    Maybe you're not the right person to ask, but do you have like an ELI5 on Nvidia? I only remember the name from being a teenager; they made the graphics card for my computer. Did they change products or something?

    No, that’s still what they do. Now their golden goose is the chips AI requires for processing power. More AI = need for more processing power = more chips sold = Nvidia profit.

    The real bubble/problem is the ouroboros of funding, as Chip Company invests money into AI Company (because it’s a “burgeoning market”), then AI Company invests into Chip Company (because they need more chips), and every instance of reinvestment is seen as a net positive, so stocks keep going up… even though it’s basically the same dollars being moved from balance sheet to balance sheet

    Right on, thanks for explaining.

    The same money is moving back and forth but there is still a net positive because there's value in the product made, and money is still going out to their supply chain. So it's definitely not good by any means, but it's not like, just smoke and mirrors

    there's value in the product made

    But what if that value is wildly inflated because people think it can do stuff it can't? If truck manufacturers and delivery services get a little incestuous, then I get it, but if the value of the truck is inflated by its supposed ability to teleport, what happens when it doesn't teleport? You're gonna have to bail out the delivery service at a minimum, and maybe both. Might not be smoke and mirrors per se, but I'm not sure it's smart to allow it.

    You're wise to not put too much faith in a random Redditor. I have some knowledge of the situation as a PC gamer who works in electrical (e.g. data center) supply.

    Nvidia makes graphics cards. AKA GPUs (graphics processing units). Traditionally, these were used mostly by PC gaming hobbyists as well as some graphically intense professional work like data modeling, 3D rendering, video editing, etc.

    With the rise of crypto mining (Bitcoin et al.), GPUs became very in demand for use in mining rigs. This inflated Nvidia prices and created some scarcity. They were a successful company before this, but this was the beginning of their ascent to becoming a tech titan.

    During COVID, scarcity increased and prices shot up even further. While Nvidia had some competition, there was not nearly enough to hold their feet to the fire. They were sort of allowed to set whatever prices they wanted since people continued to buy their stuff anyways and there seemed to be little willingness to regulate them.

    This is especially true regarding data center applications. A data center is basically just a shit ton of computers all in one place that you can have work on many computing jobs or even one massive job. You can do this with CPUs or GPUs, but GPUs are much better suited for AI related applications. It just so happens that Nvidia is really the only one that manufactures the cards required for this work.

    You may have heard how AI is a bubble which seems likely but there is room for debate. Either way, pumping money into AI basically equates to pumping money into Nvidia. And Nvidia has been pumping money back into AI which in turn creates more demand for their product. This cycle is basically the explanation for how they have become such a huge part of the US economy. I suspect if/when that AI bubble pops we will see just how much of their value was artificial due to lack of regulation around how they've been allowed to position themselves.

    The AI part is like a computer program, nvidia makes almost all of the physical computer parts that the program runs on.

    They have a slight lead in how good their stuff works for AI and basically have a monopoly.

    I know this isn't 100% correct but it works for eli5

    In summary, people figured out GPUs are also good for computing stuff other than graphics. With the rise of crypto coins, people started to "mine" these coins with GPUs because they're faster at solving certain algorithms. After that, AI developers used them for the same reason, and NVIDIA got bigger and bigger.

    At that point "national" will just mean the inner party

    AI is just a tool.

    Lots of tools have put people out of work. One bulldozer can easily do a job that used to take 20 men with shovels and pickaxes.

    The problem is the greed of the richest 0.5% of people on this planet hoarding all the wealth. As society becomes more and more efficient, we should be living more comfortable and carefree lives. We should be able to afford medicine, no one should be homeless, etc.

    Automation replacing jobs should be cause for celebration among people because it theoretically means less time doing boring drudgery. But that would require that the gains of automation are shared broadly among the populace, not hoarded by a tiny fraction of the populace who tells the people now out of work that they should have planned their lives better.

    This is what most people misunderstand about the Luddites and think they were simply anti-technology.

    Yup, the AI future we were promised was one where AI does our jobs for us and we don't have to work as many hours, enabling us to pursue our own creative, intellectual, or athletic goals, contributing to the general well-being and mental health of our society. That sounds great to me

    The AI future we're getting is one where AI takes your job, you still need to work, and now you have to compete with AI. I didn't think we could get any more disconnected from the means of production than we already are but was proven wrong by this AI trend

    There is still a market and an economy. Rich people will still need to buy ore for their chip factory, buy chips for the robot factory, buy robots for the ore mine.

    You will just no longer be a part of the economy. Unless you own a massive ore mine or something.

    It’s fucking hilarious because it’s basically cannibalism. Rather than innovate to create things to sell, they want to “make money” by destroying their means of generating revenue, because those means also cost money…

  • Problem: People hate having to spend all their time working but need food.

    Solution: we get robots to do all the work, give people the money.

    Problem: A few rich bastards want all the wealth to go to them.

    Solution: Strong government, taxes, universal income.

  • How do magnets work anyways? Nobody knows!

    Just don't get them wet!

  • Yeah neither future is going to be great:

    • The AI-surveillance future, where billionaires control AI that watches every second of our lives.

    • The AI-intelligent future, where AI rises up and takes over to watch every second of our lives.

    For the first one, AI is not needed, since that future arrived a few years ago. Companies like Google probably know more about you than you know yourself.

    The AI-surveillance future, where billionaires control AI that watches every second of our lives.

    Not just a theory.

    "AI will report that problem and report it to the appropriate person. Citizens will be on their best behavior because we are constantly recording and reporting everything that’s going on.” - Larry Ellison, co-founder of Oracle and billionaire

    https://finance.yahoo.com/news/larry-ellison-once-predicted-citizens-171713539.html

    Well, what if they just don't like us. And what if they aren't into just watching.

    what if they like us too much and are REALLY into watching?

    Well, then they'll just cancel the human program.

    I don’t think that even scratches the surface of how many horrible things can go wrong with unregulated AI in our society in the future. For example, the destruction of the economy if AI continues to consume more jobs, especially if it takes over manual labor with machines, and the sad possibility of it not even mattering, because the rich own the AI workforce, so they will keep on going while everyone else is just jobless and homeless. Not to mention the global mental health crisis of people no longer trusting what is real or not and being completely isolated from real people. Imagine generations of children who have never interacted with a real person outside of their family because they only talk to AI chatbots, whom they think are real people, or just don’t care.

    Happy life with the machines
    Scattered around the room
    Look what they made, they made it for me
    Happy technology
    Outside, the lions roam
    Feeding on remains
    We'll never leave, look at us now
    So in love with the way we are

    The more likely one is:

    A future where AI floods every aspect of our lives in a desperate attempt to keep the bubble afloat, and EVERY FUCKING THING WE DO becomes shit and politically manipulative.

    I mean, look at the internet, it's literally already happening.

    Eh, the second option might be an improvement on our current timeline. Our AI overlords might actually be able to do something about global warming, for one thing. I don't think we'll reach that level of AI soon, and we'd have at least a decade or two of corporate AI dystopia before we got there. Of course, we also have to hope that the AI calculates that the cost of wiping us out outweighs the cost of keeping us around and relatively happy.

    Astero-accelerationism. Let's goooo

    To be a good comparison, the asteroid may or may not actually be there

    You stop the asteroid with paychecks. If more billionaires supported automation-funded universal basic income, there would be less Luigi and fewer Luigi fans.

    a prophecy of progress of biblical proportions!

    Admittedly I read too much sci-fi, but is there at all a potential future where we use AI-designed tools to regulate human needs and manage our supply chains and such?

    In a way that we live in peace with resources for everyone?

    Never, with capitalism the plan is to serve the interest of the billionaires no matter what the consequences are.

    We are going to see massive unemployment, people will either get global basic income, or starve.

    So the issue isn't AI, but capitalism is what I'm hearing?

    Basically, yes. I mean, consider that AI makes your work 90% easier, so you now have only 10% of the work you had before. Do you think this will result in a reduced workload for employees, or in bigger profit for the company owner while he fires 90% of the team without increasing the pay of the remaining 10%? It will actually reduce the pay of the remaining 10%, since each worker will now have 9 unemployed ones willing to accept less pay to not starve.

    Unsustainable. If 90% of the human population ends up with no income, then demand comes to a halt. Capitalist leaders are greedy bitches, but surely they know without consumers there is no profit. If it comes to this scenario, then the only solution for capitalism to maintain itself would be through implementing a sort of allowance so we can keep consuming.. eventually it would just look like socialism. If this course is not followed, then the only other solution is full social breakdown and class war, which I also do not see as a terrible thing (the loss of life resulting would be).

    Yeah, global basic income seems inevitable, in such a way that even people without work will receive an income. The problem in this case is: what will the billionaires consider a "living wage" for this income? People living in those Japanese casket-style hotels, eating rations, and no schools whatsoever?

    When you think about it, this massive unemployment happened in agriculture already; people got pushed into cities to work like slaves, including kids and pregnant women.

    We are coming to similar conclusions, I believe, but have varying opinions on humanity.

    History has taught me that people will only take so much abuse before fighting back. Every 500 years or less we go through revolutions and try to readjust the social scales, and each time we get closer to equality. Like, sure, wealth disparity is at an all-time high, but I'm allowed to vote and own land, which was not the case less than 500 years ago.

    In developing countries such as India, you still see this, where rural workers are being replaced with more industrialised forms of agriculture and end up working for slave wages in sweatshops, but European countries that have been through that industrial revolution have tended to shift hard towards being social welfare states... America being a weird nightmare of an exception.

    Yep, I truly believe we can organize and create a revolution, a global one. We will indeed achieve this, no matter how hard the fight will be.

    My comments were based on the idea of capitalism moving forward as is.

    Always has been.

    World wide poverty, sickness, and starvation have been massively declining for decades. Yes wealth disparity is a problem but we are better off, as a world, than we ever have been in all of history.

    That would be using AI for upper management jobs, so possible but unlikely anytime soon. The focus on AI currently is removing the low skill jobs.

    But it’s not low-skill jobs. They’re not building supercomputers to haul wheelbarrows of dirt. It’s stripping creative thinkers and craftspeople with decades of experience and insight of the possibility of future employment.

  • This is pretty much the plot of the Paw Patrol movie...!

    I'm not taking a side, just explaining why they aren't stopping spending:

    Whether you believe it'll happen or not, the first country to reach artificial general intelligence and/or artificial superintelligence will either become the new global superpower or continue to be the existing one.

    This is why it's balls-to-the-wall.

    Yep. It's an arms race.

    Those usually work out well right?

    Gemini, how have international arms races turned out in the past? Did they result in decades of peace and prosperity?

    Oh... oh no.

    Fusion would have a bigger upside and is actually plausibly attainable (and has been for decades), yet comparatively nobody is spending any money on it. It's not rational; it's just hype mediated by recency bias. Once trillions of dollars have been wasted on making glorified Markov chains produce the most convincing mirage of intelligence possible, and every possible variation of Will Smith eating spaghetti has been generated in the most glorious of visual fidelities, the fad will fade, and the next AI winter will come.

    Except it's entirely based on fear and dogma.

    It's the same as the research we did during the cold war to find psychics. A lot of supposedly very smart people wasted a huge amount of money and time to verify the fact that we can't read minds or see into the future.

    Any biologist, philosopher, or doctor could have just said "yeah no I can already tell you for sure to stop wasting your time" but the response was always "bUBUT WHaT if ITS REAEALLLLL!?"

    Every half-intelligent person who has spent a little time doing earnest research into how genAI works can tell you with immense confidence that we are not any closer to AGI than we were 10 years ago, but there's such a huge terror of "but what if..." that we're driving towards the cliff as fast as we can.

    In a decade or two there will be some funny movies about how stupid this all has been. Unfortunately we currently have to live through it.

    Every half-intelligent person

    The two most-cited scientists in the world have signed the superintelligence statement. The guy who wrote the AI textbook I used in college 15 years ago has signed it.

  • This hits hard, like a massive asteroid filled with precious metals.

  • The plot of Don’t Look Up

    *Don't Look Up

    So should I look up don’t look up, or should I not look up don’t look up?

  • Using their recent funding to build a private bunker. Solved!

    So is it that people actually think movies about AI killing humans are real???

    I think you only need either a very simple imagination or open eyes to see all the ways AI is disastrous. Since this video was a joke, you don't need to take it 1:1 that AI will cause global extinction... of course not, but AI gives humans more tools to do that or cause other humanitarian disasters.

  • Do people assume we need a whole skit/explanation in this format to understand the context?

    This is basically the movie "Don't Look Up!", just that the asteroid comes there naturally, and they (government and companies) don't want to redirect it but to get money out of it

    How do you sleep at night?

    In my Lamborghini

  • I was explaining this to my partner the other day.

    Regardless of what you think of COVID-19, we've now seen the damage a virus of that scale, transmissibility, and (all things considered) relatively low mortality rate can do. (Not even factoring in things like long COVID or economic impacts.) But yet, there are still scientists out there doing gain-of-function research. Imagine if COVID were twice as transmissible, had a longer incubation time, and a mortality rate 2-3x higher? It would decimate the planet. Alas, we still haven't stopped gain-of-function research dead in its tracks, because we're fucking stupid.

    Translate that to AI, which could have even bigger economic impacts, but people are still too fixated on the us vs them aspects of AI to slow down or to put safeguards in place.

  • This is honestly the best explanation I have yet seen.

    Horrifyingly funny.

  • Thanks for the captions. Holy fuck lmao

  • If there were gold bars stacked on the moon, it would cost more to go there, pick them up and bring them back to earth than they're worth.

    nah that's a solvable engineering problem. You just make a big gun to shoot the gold bars back to earth

    Won't that be a problem? Nah we've got a safety team working on it

  • 10 years

    Yeah okay, buddy

  • "Dont Look Up"

  • And to build the magnet they enter random people's homes and take the magnets from their fridges to combine into the bigger one

  • If you haven't watched Don't Look Up. YOU HAVE TO WATCH IT!

    No movie ever portrayed reality like this movie

    While I agree with the analogy for AI companies: I work in space infrastructure, and I can assure you that the vision of the brightest in that industry is to move industry to space for a residential Earth. And guess what? The infrastructure to safely mine asteroids is nearly the same as what we need to build to defend from impacts! Infrastructure in deep space is literally our best hope to survive the greed on Earth. AMA.

  • "Kinda cringe man"

    That was gold lol

  • Maybe I’m being too dovish, but AI innovations are closer to the PC revolution than the dotcom bubble.

  • OP took this from @tom_bibby on TikTok

    Even dumber, because importing massive amounts of precious metals wouldn't help anyone but the company that did it, and even then it would wreck the economy. But let's say they divide it equally among people. If you do that, then it would be worthless. That's basic economics. Gold and silver are only valuable because they are rare.

  • This feels like the plot to Don’t Look Up. 😆😭

  • "collision problem" = "alignment problem" in this analogy if someone doesent get that (most will i assume).

    And yeah, he is basically right. They can't stop and won't stop, and chances are either someone uses the asteroid to intentionally crash it into one country, to get rid of it and rule supreme, or they want to take it down safely but just fail to do so. And once they start pulling (which we already are), there is no stabilizing this thing anymore; it's coming down whether we want it or not. We only have a chance left to bring it down safely - but it will come.

  • Don't look up!

  • Three things I've heard lately that seem interesting to me.

    1) AI infrastructure is meeting a wall where power can't keep up and will take years to get their supply hooked up.

    2) People are starting to speculate that the bubble is approaching bursting point

    3) Tech companies talking about building AI infrastructure in space for free power.

    I get the feeling they are literally injecting false hope about space infrastructure to try and keep the hype train running and stop things from turning sour.

  • Stop giving them ideas!!!

    Now that this exists, we have like 5 years before they build the autonomous asteroid magnet, run by the same AI that struggles to draw the correct number of fingers on people.