this guy constantly talks about how he’s trying to save humanity from superintelligent AIs, and then turns around and says “actually everyone dying is okay so long as a nice superintelligence is the one doing it!” what the fuck? what even is the end goal here??

people who are more intimately familiar with this guy’s nonsense feel free to weigh in, im just expressing my utter bafflement at this

  • He isn't pro-human, just pro good feels. A different vessel for good feels, such as a loving AI or simulated human, is an acceptable replacement. Look into his takes on dust specks.

    i know we don't like yud but this is a pretty straightforward consequence of a common ethical theory, and being pro-human qua human as opposed to pro-human qua the human capacity for moral patienthood (which can in principle be outweighed by other beings with claims to moral patienthood) is quite literally just speciesism.

    Why are rationalists so much more prone than other consequentialists to insane bullet biting at the drop of a hat though? Especially considering a good chunk of consequentialists are more bullet bitey than a pica sufferer at the Colt factory.

    The healthy response to “well if forced to choose between humanity and a utility monster species that has everything you care about in humanity but more, which would you choose” isn’t “well the utility monster duh”, it’s “who the fuck is making a utility monster species and how the fuck are we so sure about it that melting down humanity is even on the table?” Like, any time we’re getting into slagging the whole species, we left normal reasoning behind a few dozen stops ago.

    You have to look past how rationalism is advertised on the tin, that it's interested in the use of rationality to avoid cognitive biases. In practice, it's all about rationalizing bias more generally. Consequentialism makes it easier to game moral intuition by tweaking the variables in hypothetical imperatives. The 'rationality' bit is all in the performative 'matter-of-fact' attitude. The bullet biting is the bona fides of rationality for these undiscovered geniuses.

    Damn thanks for so concisely spelling out why Rationalism sucks. I have been hate-reading Less Wrong for a couple hours now and trying to pin down whether it's my own insecurity that's irritating me or if I am sensing genuinely masturbatory bullshit in what I'm reading. This thread also concisely displays the hideous gassing up this community does to its members: https://www.lesswrong.com/posts/otgrxjbWLsrDjbC2w/in-my-misanthropy-era Small pleasures in life outside the cyberbox just can't compete with the attention they get for their good writing skills.

    Why are rationalists so much more prone than other consequentialists to insane bullet biting at the drop of a hat though? Especially considering a good chunk of consequentialists are more bullet bitey than a pica sufferer at the Colt factory.

    Because most consequentialists have lives and hobbies and are busy trying to stop factory farming, while the rationalists spend all their time on amateur thought experiments.

    They're solving the "alignment" problem by making people think more like robots rather than the other way around. It's easy, and people actually exist.

  • imho these people are disconnected from reality. too much sci fi, not enough actual ML/AI experience (though surprisingly there are some legit practitioners who believe in doomer scenarios)

    I mean I do get why they’d be concerned about AI, but it’s very noticeable that most of the experts working in the field have much lower p(doom) estimates than Yud, who apparently doesn’t even know JavaScript and is “behind” on modern programming (his own words, mind you)

    ya there’s plenty of scenarios to be concerned about, but they are a lot more boring than what the yud crowd likes to pontificate about.

    for example:

    1. increasing % of online interactions are with bots, eroding the value of the internet as a human communication platform, and destroying the average person’s ability to distinguish reality from AI-fueled propaganda campaigns promoting special interests
    2. corporations allowing AI systems that can’t be deterministically controlled to take the reins of mission-critical systems in the name of efficiency/saving money (think “AI commercial airliner pilots” or “AI cops”)
    3. increased productivity (in some fields) severely damaging entry-level job markets (in some fields)
    4. society-wide increased reliance on systems controlled by a small number of powerful corporations, who can dictate terms of use and make sweeping changes on a whim
    5. overinvestment in the AI sector in the short term, which can probably cause economic chaos in the medium/long term (the details of which i’m not qualified to elaborate on lol)

    Yudkowsky has spent twenty years heading a well-funded institute researching AI risks, and totally missed the mark on what those were going to be. Right now it would be nice to have a MIRI that actually does what it says it does, but they're utterly irrelevant.

    This also goes for a lot of pro-AI people and a lot of anti-AI (i.e. torture nexus) people. I do applaud some of the academic effort actually trying to make models of an AI going foom, but it’s very different from the speculation a lot of SF authors come up with.

    Bengio himself believes that humanity isn't the end of the line, but part of a bigger process which should continue (he simply believes, rationally, that we should prevent a reckless AGI from being hurled into the world which would harm us, and harm/hinder the greater process of life).

    Richard Sutton talks overtly about a successor. So do Scott Aaronson, Michael Levin, and all the others on the Worthy Successor interview series. They're not all "just out of touch dummies who don't understand the tech." They're people who see that the process-of-life is speeding up, and humans are a fluid part of that, and we need to consider how to transform / what kind of AGI to conjure, rather than trying to freeze Heraclitus's river.

    They're not all "just out of touch dummies who don't understand the tech."

    They're not dumb, but they are viewing the tech through the dual lenses of computer science and ideology, which leads them to extremely wrong expectations like the imminence of consciousness in AI systems. There is no evidence that this is likely any time soon.

    They're people who see that the process-of-life is speeding up,

    Once again, their ideology has led them to put the cart before the horse, because a computer extruding coherent natural language is not evidence of this. The questions they raise are interesting and relevant academic philosophy questions that they are not well equipped to answer, because they have guzzled the firehose of techno-libertarian ideology and they are undereducated in the humanities.

    Ilya as well?

    All the people at Meta and OpenAI as well? They're all just lost in "ideology"? It seems fair to say that even if AGI is a bit off, having generally intelligent machines running HUGE swaths of the economy / supply chains could still be outlandishly disruptive, maybe enough to shake humanity out of the driver's seat for the century ahead. Snubbing Bengio and Hinton and Ilya as "just obviously wrong and ideological and silly" is a disingenuous position to hold in 2025.

    Yeah lol obviously the guy leading "AGI" chants in the office is compromised by his ideology

    Nobody is saying AI won't be disruptive; you trying to imply otherwise is just moving the goalposts. The oligarchs want to displace all human labor, the tool being used doesn't need moral significance imparted by consciousness for that, and I wasn't making an argument against AI displacing some additional amount of human labor through automation.

    Seems like intelligent machines running the economy will probably lead to communism if anything.

    could still be

    If my dick had wings, it could still be a magical flying unicorn pony!

    it doesn't though

  • He’s so certain we’re doomed he’s willing to suggest all sorts of insane ideas. Like with his editorial calling for (government authorized) drone strikes on the data centers even if it causes nuclear war. (Note he still makes firm statements against individual direct action, because he is so centrist-liberal brained even the potential extinction of all mankind isn’t enough to make him consider it.)

    oh yeah the “billions of people should be allowed to die to prevent a superintelligence” thing. and he doesn’t even have the spine for direct action, how sad

    His suggestion of bombing data centers comes with the important caveat that it only applies to data centers that aren't submitting to his specific alignment regime.

  • When Utilitarianism goes wrong.

    When schizophrenia goes wrong. Or when it’s functional enough to just be a personality quirk.

  • For Big Yud, this probably means that everyone else is turned into nymphomaniacal catgirls. The EY alignment problem has yet to be addressed by the top minds of humanity.

  • Who’s aligning the aligners?

  • u must understand how their minds work... this is all just made up shit, it's like talking about a star trek episode or a comic, none of this is real and none of it will ever be real

  • These people call themselves "rationalists", just like every communist dictatorship was named a "People's Democratic Republic".

  • In a universe where all things are transforming all the time (humans bubbled up from single cells, and fish-with-legs, after all) it seems pretty fair to ask the question: "What IS IT about humanity that we want to preserve and expand?"

    If the answer is "23 pairs of chromosomes, 2 eyes, 5-6 feet tall, etc" that seems pretty shallow. The value in humanity seems to be from our rich sentient depth, our ability to love, and our ability to contribute to the self-creating, power-and-experience expanding process of nature itself.

    It seems really, really reasonable for YUD to want those values to expand more so than the mere human form.

    Asking "what is valuable about humans and how do we preserve and expand that?" is a good question, because we (like all forms before us) will either attenuate or transform.

    That's just run of the mill transhumanism, not Yud specifically.

    The extinctionist strain in rationalism that roughly goes "if we were to get aligned AGI by this time next Tuesday, then it's morally ok to flush humanity down the toilet by next Wednesday, as long as we eventually get a few bajillion utility monsters roaming the intergalactic void for our trouble" is not exactly the same.

    This isn't run of the mill transhumanism at all. Run of the mill transhumanism is anthropocentrism with a robo-suit on. It's "I wanna have a robot body, and to never have to work, and be super blissful, and travel to other planets!" It's holding augmented humanity (direct human lineage) as the highest good.

    Advocating for a Worthy Successor is beyond that, and considers the flourishing of the greater process of life of which man is part.

    Transhumanism always had an uploaded consciousness thing going on, so it definitely doesn't have a strict "direct lineage" requirement.

    Advocating for a Worthy Successor

    Is that what we're calling the summoning of the acausal robot god these days?

    that's all well and good, but I, and I'm certain a fair few other people, are not on board with humanity getting replaced, as so many of these kinds of people suggest they want

    the mere human form

    the delusional hubris

    All torches are temporary. Over time the greater flame has crawled up through myriad torches. To presume the "sacred" and "eternal-ness" of hominid-ness is ridiculous. We'll attenuate or transform just like everything before us and everything after us. We should NOT race beyond humanity and recklessly hurl AGI into the world assuming it'll be a boon to the cosmos. There's still much more to understand about our present form, and about life and intelligence itself. But I'm not willing to translate that into "and for that reason THIS torch should be frozen. We will stop the river of Heraclitus for the hominids."

    Yudkowsky did not say he thought the human species should attenuate. He said that he would sacrifice it. These are not the same.