Last November, he claimed everyone in the field agreed we know how to make AGI.

This November, everyone in the field actually agrees we just need a few breakthroughs that we'll get in the next 2-20 years.

So either it's grifting, or the views changed so much within a year that they're wildly unreliable.

  • I don't get what the big holdup is, just feed gpt5 all of its own code and ask it to fix it and design a slightly better model. Then, you do the same for the model gpt5 spits out, and repeat over and over until you get the singularity, right?? Should be pretty easy... (see the sketch at the end of this sub-thread)

    Poe's law strikes again

    I'm kinda surprised we don't see that idea being discussed earnestly in the wild more often. Like, the "we just need to build a machine that's able to build a slightly better version of itself, and it'll just take off from there" idea was kinda the original concept people had of the singularity, something like 60 years ago, and the internet is FILTHY with outrageous claims of LLMs being able to limitlessly improve your code/website/app/software product. Seems inevitable that people would put two and two together.

    I mean, it's completely moronic, but it's AI cultists we're talking about; we've all seen them defend dumber stuff.

    As a matter of fact, the wild LLM-bro claims I keep reading circulate the idea of recycling LLM slop for "emergent" (the magic word!) self-improvement more and more often. They've never seen noise that couldn't be improved by just more scaling!

    Unjerk/ The real problem is that iteration is slow. Sure, you can tweak the training a bit, but you only know whether it worked after spending a fraction of a percent of the national electricity production.

    /unjerk The real problem is that “fix it” is ambiguous. The system could evolve, but in a survival-of-the-fittest sense, not a “smartest algorithm” sense. IMO you’d end up with hyperslop.
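
    Unjerk/ for the record, here is roughly what the top comment's "just loop it" plan looks like written down. Every name in this sketch is hypothetical (nothing here is a real API), and the whole ambiguity people are pointing at lives inside score():

    ```python
    # Hypothetical sketch of the naive recursive self-improvement loop from the
    # top comment. Every function here is a placeholder, not a real API.

    def get_source_code(model):
        # "feed gpt5 all of its own code" -- pretend this returns the whole stack
        return "..."

    def ask_model_to_fix_it(model, source):
        # the hand-wavy step: "ask it to fix it and design a slightly better model"
        return source

    def train(blueprint):
        # the slow, expensive part: each pass burns a noticeable chunk of a grid
        return blueprint

    def score(model):
        # the ambiguous part: "better" by what metric? This is where
        # survival-of-the-fittest slop diverges from "smarter".
        raise NotImplementedError("nobody has defined this")

    def naive_recursive_self_improvement(model, generations=100):
        for _ in range(generations):
            candidate = train(ask_model_to_fix_it(model, get_source_code(model)))
            if score(candidate) > score(model):  # blows up here, every time
                model = candidate
        return model  # singularity not included
    ```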

  • Sam wasn't wrong; he just knew erotica was the answer and kept it to himself until October.

    The median view for the boots on the ground was “more porn”

    Makes sense, you want the AI to self-replicate, you need to get it ~in the mood~

  • AGI in

    3

    2

    1

    0.5

    0.000043767

  • > everyone whose livelihood depends on the hype are saying the same things as the person who owns the company whose livelihood depends on the hype

    wow

  • “~median” is such a good tell for a really stupid guy trying to sound smart

    Not as smart as “modal”.

    Not just that, but "~median view", as if they're plottable and comparable in some way.

    Came here for this. And he used ~ to indicate “approximately”… does he even know what “median” actually means? He would sound smarter just saying “average” like the rest of us plebs.
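
    Unjerk/ since half this sub-thread is now a stats lecture: with a handful of made-up timeline guesses (numbers invented purely for illustration), here's what mean, median, and mode actually pick out:

    ```python
    import statistics

    # Made-up "years until AGI" guesses, purely for illustration
    guesses = [2, 3, 5, 5, 10, 20, 100]

    print(statistics.mean(guesses))    # ~20.7 -- the "average", dragged up by the outlier
    print(statistics.median(guesses))  # 5 -- the middle guess
    print(statistics.mode(guesses))    # 5 -- the most common ("modal") guess
    ```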

  • @ylecun said about 10 years.... None of them are saying ASI is a fantasy, or that it's probably 100+ years away.

    I thought LeCun explicitly rejects all this ASI stuff?

    "out of all the people who said that ASI is 10 years or less away that I listed, none of them say it's more than 10 years away!"

    LeCun actually retweeted this and said “precisely”

    Ugh, it's so confusing because no one even has a real definition of "AGI", so what would it even mean to achieve it? How would we know? Is LeCun talking about the same thing as the rest of these guys? Are they talking about the same thing as each other?

    The funniest definition, of course, is the only really quantifiable one I've seen: Microsoft and OpenAI secretly define AGI internally as the point where the system makes $100B in profit.

    Here's how I get clarity:

    Pick any reasonably involved human job. Let's think together about how far we are from automating that job completely, so it's done as well as a 'typical human' would do it. AGI is being able to do that for every job, simultaneously.

    "A lot of the disagreement is in what those breakthroughs will be and how quickly they will come."

    So...they agree that it will happen eventually, but they disagree on when and how. Which is pretty much disagreeing on every detail, and only agreeing that their goal is somehow possible eventually.

    I feel like it's a surprising amount of agreement that most experts would say "sometime in the next 25 years".

  • All they need, now that they know, is more of your money.

  • It is difficult to get a man to understand something when his giant cash incinerator that just sets billions of dollars on fire depends on his not understanding it