Dr. Émile P. Torres
"Human Extinction: A History of the Science and Ethics of Annihilation." Routledge. Philosopher & historian of all things human extinction-related. he/him
Oct 24 10 tweets 2 min read
I sent a paper of mine to an Oxford philosophy prof earlier this year, and he *loved it*. Told me I should submit it without edits, and that he'd be citing it in future papers of his. So, I submitted it to an ethics journal -- desk rejection. I submitted it to another journal, and this time it got reviewed: one reviewer liked it but opted to reject (??), while the other reviewer said, basically, that the paper is complete trash. Since then, I've sent it out to 5 other journals -- all desk rejects. I'm about ready to post it on SSRN so that this Oxford prof
Apr 6 25 tweets 7 min read
What Musk and Beff Jezos aren't saying is that Silicon Valley is OVERRUN by human-extinctionist ideologies! The dominant visions of our future among the tech elite, espoused by both Musk and Beff, ARE EXTINCTIONIST. A 🧵 on my newest article for @Truthdig: truthdig.com/articles/team-…
Quoted tweet from @elonmusk: "Too much of the environmentalist movement has morphed into a human extinctionist movement." Reply from Beff Jezos — e/acc ⏩ (@BasedBeffJezos): "Many such movements... initially noble goal, progressively co-opted by the extinctionist mind virus." This is absolutely crucial for journalists, policymakers, academics, and the general public to understand. Many people in the tech world, especially those working on "AGI," are motivated by a futurological vision in which our species--humanity--has no place. We will either be
Feb 25 7 tweets 2 min read
Something happened recently that has brought me to tears on several occasions. Basically, person A is struggling with serious health issues, and person B, who is close to person A, has expressed unconditional support for A, no matter how bad things get. This is not normal (!!)—I don't mean that normatively (a claim about what ought to be), but statistically (a claim about what is the case). Many, many, MANY people--friends, family, partners, etc.--leave and abandon others in times of need. When I was very young, an older relative of mine told me that I
Jan 5 6 tweets 2 min read
Fascinating. Ghosting in a long-term relationship is, I think, one of the most traumatic experiences one can have. It will never not be the case that I moved to a foreign country for someone who ghosted me when I got sick, after *years* of living together. It's changed my entire worldview, tbh. I wasn't a philosophical pessimist or nihilist when I entered Germany, but--ironically--I left Germany as one. Hard to express how much ghosting has impacted me. Studies, though, suggest that ghosting can harm ghosters, too. More here: truthdig.com/articles/what-…
Dec 16, 2023 26 tweets 8 min read
Last month, Sam Altman was fired from OpenAI. Then he was rehired. Some media reports described the snafu in terms of a power struggle between Effective Altruists and "accelerationists"--in particular, "e/acc." But what is e/acc? How does it relate to EA?

truthdig.com/articles/effec… And what connections, if any, does e/acc have with the TESCREAL bundle of ideologies?

There are two main differences between e/acc and EA longtermism. The first concerns their respective estimates of the probability of extinction if AGI is built in the near future. [Image: Yudkowsky with "We're all gonna die" next to his face.]
Nov 10, 2023 31 tweets 9 min read
The Effective Altruist movement has had a terrible year. Its most prominent member, Sam Bankman-Fried, was just found guilty of all 7 charges against him. But the FTX fiasco wasn't the only scandal that rocked EA. A short 🧵 about my newest article:

truthdig.com/articles/effec… One might say that "a single scandal is a tragedy; a million scandals are a statistic." From a PR perspective, it's sometimes *better* to have a whole bunch of scandals than just one major transgression, because people start to lose track of the what and when. Hence, I thought

[Article excerpt: "The Bankman-Fried debacle was just one of many controversies to have embroiled the EA movement over the past year and a half. In fact, there were so many that it's difficult to keep track. And so we run the risk of the scandals being buried in the collective memory and forgotten. This is a strategy employed — whether consciously or not — by people like Elon Musk and Donald Trump. To modify a line often attributed to Joseph Stalin, one scandal is a tragedy; a million scandals are a statistic — and statistics don't have the same psychological force or impact that tragedies do."]
Jun 11, 2023 21 tweets 6 min read
Curious about all these "warnings" and "statements" signed by "AI experts" that AGI threatens humanity with "extinction"? In this rather philosophical article, I explain what they're talking about -- and it's almost certainly not what you think. A short 🧵

salon.com/2023/06/11/ai-… Could AGI actually cause "human extinction"? That's a "how" question that I'll discuss in another article. What I focus on here is what "human extinction" means in the first place. When TESCREALists say "We must avoid human extinction," what exactly are they talking about?
Jun 10, 2023 7 tweets 4 min read
A short thread of articles that mention the "TESCREAL" acronym. First up, one from @davetroy in the Washington Spectator (@WashSpec):

washingtonspectator.org/understanding-… POCIT (People of Color in Tech, @pocintech) has a really nice discussion of the TESCREAL bundle of ideologies here:

peopleofcolorintech.com/articles/timni…
May 31, 2023 12 tweets 4 min read
From LessWrong. A short 🧵 on the significance of this post and its unsettling question: [Screenshot: "Why is violence against AI ..."] After saying he doesn't advocate for violence, Yudkowsky has endorsed nanobots causing property damage to AI labs. The reference to "instant death" is to the idea that a rogue AGI could create self-replicating atmospheric diamondoid bacteria (what??) that blot out the sun, or something. 🤖🦠 So it is - again - explicit...
May 30, 2023 7 tweets 2 min read
"Extinction" would directly affect the elite, which is why they care about it mitigating "extinction" risks. "Sub-extinction" risks from AI that harm marginalized peoples don't get signatures like this, from Sam Harris, Grimes, and Toby Ord.

Here's a crucial point: if AGI kills everyone, then marginalized groups lose (along with everyone else). Alternatively, if resources are poured into preventing hypothetical sci-fi scenarios involving AGI, marginalized groups ALSO lose, because they'll continue to be ignored. Either way, certain groups will lose
May 8, 2023 5 tweets 1 min read
Nonsense. One doesn't need an authoritarian *state* for eugenics to be profoundly illiberal. If "enhancement" is even possible (Susan Levin provides *many reasons* to think it won't be), those who choose not to be "enhanced" (so-called "legacy humans," lol) will be left behind, thrown into the gutter, abused, exploited, and decimated (perhaps through murder -- just read what Bryan Caplan wrote about "IQ realists"). No one who knows anything about our world today or about human (esp. Western) history could doubt this. People with power do not treat the
May 7, 2023 5 tweets 2 min read
This is from 2017, but probably more relevant now than back then. Bryan Caplan, who once said that Robin Hanson (the "gentle, silent r*pe" & "sex redistribution" guy) had "schooled" him on Men's Rights, admits to being an IQ realist. He then writes that: [Screenshot: "I'm an IQ realist, all the ..."] Some of the comments are f*cking atrocious: [Screenshot: "Society, due to exterminati..."]
Apr 25, 2023 13 tweets 2 min read
I told someone the other day: "I literally couldn't care less about mitigating existential risks." They were baffled: "How can you not care about that?? What, you want 'civilization' to collapse and lots of casualties?" No, of course not! Let me explain by analogy: Imagine you start seeing people talk about "threats to human flourishing." The term starts popping up all over the place. Then someone declares: "Frankly, I couldn't care less about mitigating threats to human flourishing! Pfff!" That sounds odd to you: who would be against
Apr 15, 2023 17 tweets 4 min read
Here's another philosophical reflection piece on the ethics of human extinction -- a companion piece to my recent Aeon article, which drew from chapter 7 of my forthcoming book. A short 🧵. truthdig.com/articles/the-b… First, if you're curious about the background, give 👇 a quick read. It offers a brief but, I hope, useful overview of the topography of "Existential Ethics," or the study of whether our extinction would be right or wrong, good or bad, better or worse.

aeon.co/essays/what-ar…
Apr 14, 2023 5 tweets 1 min read
So, this is a really, really good question. Despite what you might think, longtermists DO NOT care about the extinction of Homo sapiens. If, say, Homo sapiens were to die out completely while at the same time leaving behind some successor species of intelligent machines that colonize space and create "astronomical" amounts of value, this wouldn't be a tragedy, but something to celebrate. The surprising truth is that longtermists, in this sense, don't care at all about "our" extinction. In my book, the difference is captured by a distinction between
Apr 14, 2023 17 tweets 6 min read
Lots of people right now are talking about advanced AI destroying humanity. But few know anything about the looooong history, going back to the 19th century, of inaccurate predictions that doom is imminent because of science and technology. A short 🧵. [Image from the movie Metrop...] If 2015 is when "AI safety" went mainstream, 2023 is when "AI doomerism" did the same. The fact is that we just don't know whether advanced AI systems will destroy humanity, if they are even possible in the first place. Part of what makes apocalyptic prophecies so seductive is [Image: "Yudkowsky: we're all gonna ..."]
Apr 13, 2023 8 tweets 2 min read
Some "fun" facts as we approach the middle of April. In the US, April is considered the beginning of the "killing season," as terrorism experts call it, which lasts through September. This is when roughly 80% of all far-right terrorist attacks, murders, bombings, and massares have happened. April 15 to April 20 is especially dangerous. Why? April 15 is Tax Day, and right-wingers hate the federal government. April 19 and 20 have even more significance to extremists because:

- April 19 is when the battles of Lexington and Concord
Apr 13, 2023 34 tweets 9 min read
On this week’s episode of “WTF is TESCREAL?,” I’d like to tell you a story with a rather poetic narrative arc. It begins and ends with Ted Kaczynski—yes, THAT GUY, the Unabomber!—but its main characters are Eliezer Yudkowsky and the “AI safety” community.

A long but fun 🧵. [Screenshot: "by far the greatest danger ..."] In 1995, Ted Kaczynski published the "Unabomber manifesto" in WaPo, which ultimately led to his arrest. In it, he argued that advanced technologies are profoundly compromising that most cherished of human values: our freedom. washingtonpost.com/wp-srv/nationa… [Screenshot: "The Unabomber Trial: The Ma..."]
Apr 12, 2023 4 tweets 2 min read
In January, an old racist email written by Bostrom was made public. Bostrom's apology was arguably more atrocious. Defenders screamed that he'd be cancelled, but I calmly reassured them: "No, I PROMISE you he will not be cancelled. Please TRUST me." Well?

nytimes.com/2023/04/12/wor… I was right. @UniofOxford launched an investigation and ... crickets. TV programs have interviewed Bostrom since then, as did @bigthink in a video released just 3 days ago. Now, the @nytimes has handed him a megaphone. So, folks, for the 1-millionth time,

Apr 1, 2023 28 tweets 9 min read
Lots of people are talking about Eliezer Yudkowsky of the Machine Intelligence Research Institute right now. So I thought I'd put together a thread about Yudkowsky for the general public and journalists (such as those at @TIME). I hope this is useful. 🧵 First, I will use the "TESCREAL" acronym in what follows. If you'd like to know more about what that means, check out my thread below. It's a term that Timnit Gebru (@timnitGebru) and I came up with.
Mar 31, 2023 13 tweets 0 min read