Dr. Émile P. Torres
Mar 13, 2023
I see some folks starting to use the “TESCREAL” acronym. So, here’s a short thread on what it stands for and why it’s important. 🧵
Consider the following line from a recent NYT article by Ezra Klein. He’s talking about people who work on “AGI,” or artificial general intelligence. He could have just written: “Many—not all—are deeply influenced by the TESCREAL ideologies.” The passage reads:

“I often ask them the same question: If you think calamity so possible, why do this at all? Different people have different things to say, but after a few pushes, I find they often answer from something that sounds like the A.I.’s perspective. Many — not all, but enough that I feel comfortable in this characterization — feel that they have a responsibility to usher this new form of intelligence into the world.”
Where did “TESCREAL” come from? Answer: a paper that I coauthored with the inimitable @timnitgebru, which is currently under review. It stands for “transhumanism, extropianism, singularitarianism, cosmism, Rationalism, Effective Altruism, and longtermism.”
Incidentally, these ideologies emerged, historically, in roughly that order.

There are at least four reasons for grouping them together as a single “bundle” of ideologies. First, all trace their lineage back to the first-wave Anglo-American eugenics tradition.
I touched on this a bit in a recent article for @Truthdig. Transhumanism was developed by eugenicists, and the idea dates back at least to a 1927 book revealingly titled “Religion Without Revelation.” Transhumanism was introduced as a secular religion.

truthdig.com/dig/nick-bostr…
The first organized transhumanist movement appeared in the late 1980s and early 1990s. It was called “extropianism,” inspired by the promise that advanced tech could enable us to become radically enhanced posthumans. This is from a 1994 Wired article about the Extropians:

“People have dreamed such dreams before, of course: they’ve wanted to fly like eagles, to run like the wind, to live forever. They’ve dreamed of becoming like the gods, of having supernatural powers. The difference is that now, suddenly, all of it is entirely possible. For the first time in history, science and technology have caught up to the wildest of human aspirations and hopes. No ambition, however extravagant, no fantasy, however outlandish, can any longer be dismissed as crazy or impossible. This is the age when you can finally do it all.”
Reason two for conceptualizing the TESCREAL ideologies as a bundle: their communities overlap, both across time and today. All extropians were transhumanists; the leading cosmist was an extropian; many Rationalists are transhumanists, singularitarians, longtermists, and EAs; longtermism was founded by transhumanists and EAs; and so on. The sociological overlap is extensive. Many who identify with one of the letters in “TESCREAL” also identify with others. It thus makes sense to talk about the “TESCREAL community.”

(Pictured: a panel that included many TESCREAL community members, or people sympathetic to TESCREALism.)
Third reason: as this suggests, the worldviews of these ideologies are interlinked. Underlying all of them is a kind of techno-utopianism plus a sense that one is genuinely saving the world. Over and over again, you find talk of “saving the world” among transhumanists, Rationalists, EAs, and longtermists. Here’s an example from Luke Muehlhauser, who works for the EA organization Open Philanthropy, leading their “grantmaking on AI governance and policy.” He used to work for the Peter Thiel-funded Machine Intelligence Research Institute:

“So you want to save the world. As it turns out, the world cannot be saved by caped crusaders with great strength and the power of flight. No, the world must be saved by mathematicians, computer scientists, and philosophers.”
The vision is to subjugate the natural world, maximize economic productivity, create digital consciousness, colonize the accessible universe, build planet-sized computers on which to run virtual-reality worlds full of 10^58 digital people, and generate “astronomical” amounts of “value” by exploiting, plundering, and colonizing. From the abstract of Nick Bostrom’s “Astronomical Waste: The Opportunity Cost of Delayed Technological Development” (Utilitas Vol. 15, No. 3, 2003, pp. 308-314):

“With very advanced technology, a very large population of people living happy lives could be sustained in the accessible region of the universe. For every year that development of such technologies and colonization of the universe is delayed, there is therefore an opportunity cost: a potential good, lives worth living, is not being realized. Given som...”

However, TESCREALists also realize that the very same tech needed to create Utopia carries unprecedented risks to our very survival. Hence, as with most religions, there’s an apocalyptic element to their vision as well:
the tech that could "save" us might also destroy us. This is why transhumanists and longtermists introduced the word “existential risk.” Nonetheless, the utopia they imagine is so tantalizingly good that they believe putting the entire human species at risk by plowing ahead with
AGI research is worth it. (Imagine: eternal life! Unending pleasures! Superhuman mental abilities! No, I am not making this up—read “Letter from Utopia,” a canonical piece of the longtermist literature.)

nickbostrom.com/utopia (“Letter from Utopia,” Nick Bostrom, 2008)
The fourth reason is the most frightening: the TESCREAL ideologies are HUGELY influential among AI researchers. And since AI is shaping our world in increasingly profound ways, it follows that our world is increasingly shaped by TESCREALism! Pause on that for a moment. 😰
Elon Musk, for example, calls longtermism “a close match for my philosophy,” and retweeted one of the founding documents of longtermism by the (totally not racist!) transhumanist Nick Bostrom, who’s lionized by Rationalists and EAs. The retweeted post reads:

“Astronomical Waste”  Likely the most important paper ever written:  https://nickbostrom.com/astronomical/waste.html
Musk’s company Neuralink is essentially a transhumanist organization hoping to "kickstart transhuman evolution." 👇 Note that transhumanism is literally classified by philosophers as a form of so-called "liberal eugenics."

futurism.com/elon-musk-is-l…
Sam Altman, who runs OpenAI, acknowledges that getting AGI right is important because “galaxies” are at stake. That’s a reference to the TESCREAL vision outlined above: colonize, subjugate, exploit, and maximize. “More is better,” as William MacAskill writes. The screenshotted exchange reads:

“What's actually going on afaict: - People who value life and sentience, and think sanely, know that the future galaxies are the real value at risk. - Nobody else can act about AGI killing everyone *very soon*, because they've given up on life, or get distracted too easily.”

Altman’s reply: “i think agi safety is a great thing to care an immense about and future galaxies are indeed at risk; what i don’t really get is why a movement that seems to be almost entirely focused on agi risk feels the need to justify it with some other name/make it part of a broader cause”
Gebru and I point out that the “AI race” to create ever larger LLMs (like ChatGPT) is, meanwhile, causing profound harms to actual people in the present. It’s further concentrating power in the hands of a few white dudes—the tech elite. It has an enormous environmental footprint.
And the TESCREAL utopianism driving all this work doesn’t represent, in any way, what most people want the future to look like. This vision is being *imposed* upon us, undemocratically, by a small number of super-wealthy, super-powerful people in the tech industry who genuinely
believe that a techno-utopian world of immortality and “surpassing bliss and delight” (quoting Bostrom) awaits us. Worse, they often use the language of social justice to describe what they’re doing: it’s about “benefitting humanity,” they say, when in reality it’s hurting people
right now and there’s absolutely ZERO reason to believe that once they create “AGI” (if it’s even possible) this will somehow magically change. If OpenAI cared about humanity, it wouldn’t have paid Kenyan workers as little as $1.32 an hour: time.com/6247678/openai…
The “existential risk” here *IS* the TESCREAL bundle. You should be afraid of these transhumanists, Rationalists, EAs, and longtermists claiming that they’re benefiting humanity. They’re not. They’re elitist, power-hungry eugenicists who don’t care about anything but their Utopia.
The framework of the “TESCREAL bundle” thus helps make sense of what’s going on—of why OpenAI, DeepMind, etc. are so obsessed with “AGI,” of why there’s an AI race to create ever larger LLMs. If the question is “What the f*ck?,” the answer is “TESCREALism.” 🤬
I forgot to add: if you'd like to read more on the harms of this AI race, check out this excellent and important article by Timnit Gebru, @emilymbender, and @mcmillan_majora.

dl.acm.org/doi/pdf/10.114…

