Consider the following line from a recent NYT article by Ezra Klein. He’s talking about people who work on “AGI,” or artificial general intelligence. He could have just written: “Many—not all—are deeply influenced by the TESCREAL ideologies.”
Where did “TESCREAL” come from? Answer: a paper that I coauthored with the inimitable @timnitgebru, which is currently under review. It stands for “transhumanism, extropianism, singularitarianism, cosmism, Rationalism, Effective Altruism, and longtermism.”
Incidentally, these ideologies emerged, historically, in roughly that order.
There are at least four reasons for grouping them together as a single “bundle” of ideologies. First, all trace their lineage back to the first-wave Anglo-American eugenics tradition.
I touched on this a bit in a recent article for @Truthdig. Transhumanism was developed by eugenicists, and the idea dates back at least to Julian Huxley's 1927 book, revealingly titled “Religion Without Revelation.” From the start, transhumanism was introduced as a secular religion.
The first organized transhumanist movement appeared in the late 1980s and early 1990s. It was called “extropianism,” inspired by the promise that advanced tech could enable us to become radically enhanced posthumans. This is from a 1994 Wired article about the Extropians:
Reason two for conceptualizing the TESCREAL ideologies as a bundle: their communities overlap, both historically and in the present. All extropians were transhumanists; the leading cosmist was an extropian; many Rationalists are transhumanists, singularitarians, longtermists, and EAs;
longtermism was founded by transhumanists and EAs; and so on. The sociological overlap is extensive. Many who identify with one of the letters in “TESCREAL” also identify with others. It thus makes sense to talk about the “TESCREAL community.”
Third reason: as this suggests, the worldviews of these ideologies are interlinked. Underlying all is a kind of techno-utopianism + a sense that one is genuinely saving the world. Over and over again, you find talk of “saving the world” among transhumanists, Rationalists, EAs,
and longtermists. Here's an example from Luke Muehlhauser, who works for the EA organization Open Philanthropy, leading their "grantmaking on AI governance and policy." He used to work for the Peter Thiel-funded Machine Intelligence Research Institute.
The vision is to subjugate the natural world, maximize economic productivity, create digital consciousness, colonize the accessible universe, build planet-sized computers on which to run virtual-reality worlds full of 10^58 digital people, and generate “astronomical” amounts of
“value” by exploiting, plundering, and colonizing. However, TESCREALists realize that the very same tech needed to create Utopia also carries unprecedented risks to our very survival. Hence, as with most religions, there’s an apocalyptic element to their vision as well:
the tech that could "save" us might also destroy us. This is why transhumanists and longtermists introduced the word “existential risk.” Nonetheless, the utopia they imagine is so tantalizingly good that they believe putting the entire human species at risk by plowing ahead with
AGI research is worth it. (Imagine: eternal life! Unending pleasures! Superhuman mental abilities! No, I am not making this up—read “Letter from Utopia,” a canonical piece of the longtermist literature.)
The fourth reason is the most frightening: the TESCREAL ideologies are HUGELY influential among AI researchers. And since AI is shaping our world in increasingly profound ways, it follows that our world is increasingly shaped by TESCREALism! Pause on that for a moment. 😰
Elon Musk, for example, calls longtermism “a close match for my philosophy,” and retweeted one of the founding documents of longtermism by the (totally not racist!) transhumanist Nick Bostrom, who’s lionized by Rationalists and EAs.
Musk’s company Neuralink is essentially a transhumanist organization hoping to "kickstart transhuman evolution." 👇 Note that transhumanism is literally classified by philosophers as a form of so-called "liberal eugenics."
Sam Altman, who runs OpenAI, acknowledges that getting AGI right is important because “galaxies” are at stake. That’s a reference to the TESCREAL vision outlined above: colonize, subjugate, exploit, and maximize. "More is better," as William MacAskill writes.
Meanwhile, Gebru and I point out, the “AI race” to create ever larger LLMs (like ChatGPT) is causing profound harms to actual people in the present. It’s further concentrating power in the hands of a few white dudes—the tech elite. It has an enormous environmental footprint.
And the TESCREAL utopianism driving all this work doesn’t represent, in any way, what most people want the future to look like. This vision is being *imposed* upon us, undemocratically, by a small number of super-wealthy, super-powerful people in the tech industry who genuinely
believe that a techno-utopian world of immortality and “surpassing bliss and delight” (quoting Bostrom) awaits us. Worse, they often use the language of social justice to describe what they’re doing: it’s about “benefiting humanity,” they say, when in reality it’s hurting people
right now and there’s absolutely ZERO reason to believe that once they create “AGI” (if it’s even possible) this will somehow magically change. If OpenAI cared about humanity, it wouldn’t have paid Kenyan workers as little as $1.32 an hour: time.com/6247678/openai…
The “existential risk” here *IS* the TESCREAL bundle. You should be afraid of these transhumanists, Rationalists, EAs, and longtermists claiming that they’re benefiting humanity. They're not. They're elitist, power-hungry eugenicists who don’t care about anything but their Utopia.
The framework of the “TESCREAL bundle” thus helps make sense of what’s going on—of why OpenAI, DeepMind, etc. are so obsessed with “AGI,” of why there’s an AI race to create ever larger LLMs. If the question is “What the f*ck?,” the answer is “TESCREALism.” 🤬
I forgot to add: if you'd like to read more on the harms of this AI race, check out this excellent and important article by Timnit Gebru, @emilymbender, and @mcmillan_majora.
I'll write a Truthdig article about this soon, but for now it's worth (I think) introducing a distinction that will help make sense of the pro-extinctionism at the heart of the TESCREAL movement. This concerns what I call "terminal" and "final" human extinction. These refer to
two distinct extinction scenarios. The first--terminal extinction--would happen if our species were to disappear entirely and forever. The second--final extinction--adds a further condition. It would happen if our species were to disappear entirely and forever *without* us
leaving behind any successors. Final extinction entails terminal extinction, but terminal extinction does not entail final extinction. Those are the only technical details that one needs to know to understand the stunning surge of pro-extinctionist views these days...
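To make the logic of the distinction explicit, here's a minimal schema (my own summary shorthand, not notation from the original thread):

Terminal extinction: humanity disappears entirely and forever.
Final extinction: terminal extinction AND no successors are left behind.
Therefore: Final ⇒ Terminal, but Terminal ⇏ Final
(e.g., humanity could vanish while still leaving digital posthumans behind).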
I can't stress enough that the whole push to build AGI & the TESCREAL worldview behind it is fundamentally pro-extinctionist. If there is one thing people need to understand, it's this. Here's a short 🧵 explaining the idea, starting with this clip of Daniel Kokotajlo:
Here's Eliezer Yudkowsky, whose views on "AI safety" have greatly influenced people like Kokotajlo, saying that he's not worried about humanity being replaced with AI posthumans--he's worried that our replacements won't be "better."
In this clip, Yudkowsky says that in principle he'd be willing to "sacrifice" all of humanity to create superintelligent AI gods. We're not in any position to do this right now, but if we *were*, he'd push the big red button.
“If God manifested himself as a human in the form of Jesus Christ, then why not as a posthuman in the form of superintelligent AI?”
It's happening: Christianity and TESCREALism are merging in the MAGA movement -- or so I argue in my new article for @Truthdig. A 🧵 on this 👇
I note that there are two factions within the MAGA movement: traditionalists like Steve Bannon and transhumanists like Elon Musk. The first are Christians, and the second embrace a new religion called "transhumanism" (the "T" in "TESCREAL").
You might think that the ideologies of traditionalists and transhumanists are incompatible, but that's not the case. Already, these ideologies are merging: traditionalists are embracing key aspects of transhumanism while MAGA transhumanists are turning toward Christianity.
Silicon Valley is run by people who genuinely think the world as we know it is going to end in the next few decades. Many also WANT this to happen: they WANT the biological world to be replaced by a new digital world. They WANT "posthumans" to take the place of humans. A 🧵:
As I write in my newest article (linked below), some scholars refer to evangelical Christian Zionists as the "Armageddon Lobby." But there's a new Armageddon Lobby that's taken hold in Silicon Valley. These people embrace what Rushkoff calls The Mindset.
Champions of The Mindset accept a descriptive eschatology--or narrative of the world's end--according to which humans are just the temporary transitional species between the biological and digital worlds. Our role is to birth the digital beings who will soon come to dominate.
I've now watched most of this. One thing that's striking is the sheer number of assumptions that the authors make--assumptions about extremely complex issues. They also use phrases like "I feel that X" to express some of their conclusions. Reminds me of "The Age of Em," which
an influential AI safety researcher once described to me as having the highest bullshit-to-word ratio of any book he'd ever read. In many ways, this report (discussed in the podcast) is worse than theology, although it manages to give one the prima facie impression of rigor.
Like, WHAT IS THIS? How is this serious scholarship? Very bizarre, based on nothing but wild speculation--though I have to admit it's way more grounded than Yudkowsky's claims about the future. Here's the other scenario that the authors discuss (next tweet):
This memorandum is GREAT. It is very, very important--so I highly recommend it. However, the analysis is problematic--NOT because it's incorrect but because it's incomplete. Neoreaction could be seen as a roadmap for how to get to a certain destination. But what is that destination? (short 🧵)
The answer comes from the TESCREAL worldview, which Dave Troy has written about before (I recommend his article!). The end-goal is a techno-utopian civilization of posthumans spread throughout our entire lightcone. No, I am not kidding--I know this because I used to
be a TESCREAList! This is what Musk wants. It's what Marc Andreessen and Thiel and all the others are after. It's why Musk keeps claiming that we've secured the "future of civilization" by electing Trump president. If you want the full picture, read this: