Dr. Émile P. Torres
Apr 1, 2023 · 28 tweets · 9 min read
Lots of people are talking about Eliezer Yudkowsky of the Machine Intelligence Research Institute right now. So I thought I'd put together a thread about Yudkowsky for the general public and journalists (such as those at @TIME). I hope this is useful. 🧵
First, I will use the "TESCREAL" acronym in what follows. If you'd like to know more about what that means, check out my thread below. It's a term that Timnit Gebru (@timnitGebru) and I came up with.
Yudkowsky has long been worried about artificial superintelligence (ASI). Sometimes he's expressed this worry in truly bizarre ways. Here's how he put it in a 1996 email to the Extropian mailing list: 😵‍💫 [Screenshot: "I'm not sure I understand..."]
Yudkowsky's biggest claims to fame are (i) shaping current thinking among TESCREALists about the "existential risks" posed by ASI, and (ii) writing "Harry Potter and the Methods of Rationality," which The Guardian describes as "the #1 fan fiction series of all time."
Yudkowsky used to be a blogger at Overcoming Bias, alongside "America's creepiest economist," Robin Hanson. Why is Hanson creepy, according to Slate? Because of blog posts like (not kidding) "Gentle Silent R*pe" and another about "sex redistribution." slate.com/business/2018/…
Yudkowsky is also obsessed with his "intelligence." He has frequently boasted of having an IQ of 143, and believes you should be very impressed. In a 2001 essay about the technological Singularity, he refers to himself as a "genius." [Screenshot: "I think back to before I st..."]
Lots of Yudkowsky's followers in the Rationalist community -- I would say cult -- agree. Here's a meme they often share amongst themselves (the original image doesn't include Yudkowsky's name on the far right):
Yudkowsky has explicitly worried about "dysgenic" pressures (IQs dropping because, say, less "intelligent" people are breeding too much). Below is a rather disturbing "fictional" piece he wrote about how to get a eugenics program off the ground: lesswrong.com/posts/MdbJXRof…
You can read more about TESCREALism and eugenics (racism, xenophobia, sexism, ableism, classism, etc.) in my article for Truthdig here: truthdig.com/dig/nick-bostr…
Yudkowsky is a Rationalist, transhumanist, longtermist, and singularitarian (in the sense of anticipating an intelligence explosion via recursive self-improvement). As noted, he participated in Extropianism. I don't know whether he identifies as an Effective Altruist, but he's
right there at the heart of TESCREALism. Humorously, he once predicted that we'd have molecular nanotechnology by 2010, and that the Singularity would happen in 2021. He says in his recent interview that he grew up believing he would become immortal: youtube.com/clip/UgkxflODu…
Yudkowsky more or less founded Rationalism, which grew up around the LessWrong community blog that he created. As a former prominent member of the Rationalist community recently told me, it's become a "full grown apocalypse cult" based on fears that ASI will destroy everything.
Consequently, some have started to suggest that violence directed at AI researchers may be necessary to prevent the AI apocalypse. Consider statements in the meeting minutes of a recent "AI safety" workshop, which were leaked to me. Here's what they included: [Screenshot: "Problem: Human Enfeeblement..."]
Ted Kaczynski is, of course, the Unabomber, who sent bombs in the mail from his cabin near Lincoln, Montana. Here's another passage from the same document: [Screenshot: "Strategy: start building bo..."]
Is Yudkowsky fueling this? Yes. In the tweets below, he suggests releasing some kind of nanobots into AI laboratories to destroy "large GPU clusters." (Side note: fear of nanobots is why a terrorist group called "Individualists Tending to the Wild" has killed academics.) [Screenshot: "So it is - again - explicit..."]
Here, Yudkowsky is explicit that bombing AI companies might be justifiable, if no one were to "see it." [Screenshot: "Would you have supported bo..."]
In his recent article for @TIME, Yudkowsky endorses military strikes to take out datacenters. Astonishing that TIME published this. [Screenshot: "Shut down all the large GPU..."]
He also says that we should be willing to risk nuclear war to save the world from ASI: [Screenshot: "Frame nothing as a conflict..."]
Why is risking nuclear war worth it? Because Yudkowsky believes that nuclear war would probably be survivable, whereas an ASI takeover would result in TOTAL human annihilation. This is where the bizarre utopianism of the TESCREAL ideologies enters the picture. What these people
want more than anything, the vision that drives them, is a techno-utopian world in which we colonize space, plunder the universe for resources, create a superior new race of "posthumans," build planet-sized computers on which to run huge virtual-reality worlds full of
trillions and trillions of people, and ultimately "maximize" what they consider to be "value." Here is Yudkowsky gesturing at this very idea: [Screenshot: "I disagree that you mention..."]
Yudkowsky's claims about ASI killing everyone are highly speculative. Paul Christiano, for example, repeatedly notes in an article that Yudkowsky is not well-informed about some of the very things he likes to pound his fist and pontificate about. 🫣 [Screenshot: "As an example, I think Elie..."]
Christiano repeatedly says that Yudkowsky is overconfident, even "wildly overconfident." [Screenshot: "Eliezer seems to be relativ..."]
However, this hasn't stopped Yudkowsky from screaming that ASI is about to kill everyone. In this clip, he's asked what he'd tell young people, and his answer is: "Don't expect it to be a long life." Holy hell.
Consistent with his TESCREALism, Yudkowsky suggests that the way to avoid an ASI apocalypse is to "shut down the GPU clusters" and then technologically reengineer the human organism to be "smarter."
There's a lot more to say, but that's enough for now. I'll leave you with this little gem from last year, which I suspect accurately portrays his current thinking: lesswrong.com/posts/j9Q8bRmw…
Here are a few things I forgot to mention, btw. First, read this article, if you can stomach it. It's shocking: fredwynne.medium.com/an-open-letter…
Second, more of Yudkowsky! Remember, as longtermist colleagues like William MacAskill and Toby Ord say, respect "common-sense morality"! [Screenshot: "Yudkowsky saying that child..."]
