Émile P. Torres (they/them)
Apr 13 · 34 tweets · 9 min read
On this week’s episode of “WTF is TESCREAL?,” I’d like to tell you a story with a rather poetic narrative arc. It begins and ends with Ted Kaczynski—yes, THAT GUY, the Unabomber!—but its main characters are Eliezer Yudkowsky and the “AI safety” community.

A long but fun 🧵.
In 1995, Ted Kaczynski published the “Unabomber manifesto” in WaPo, which ultimately led to his arrest. In it, he argued that advanced technologies are profoundly compromising that most cherished of human values: our freedom. washingtonpost.com/wp-srv/nationa…
(He also whined A LOT about “political correctness,” which dovetails in an amusing way with Elon Musk’s claim that what’s needed to counter some of the negative effects of AI is “anti-woke” AI. Many lolz.)

Now, someone named Bill Joy read Kaczynski's article and
was moved by its neo-Luddite arguments. As a 2000 WaPo article noted, “Joy says he finds himself essentially agreeing, to his horror, with a core argument of the Unabomber—that advanced technology poses a threat to the human species.”

This is noteworthy because Joy wasn’t some anti-technology anarcho-primitivist “Back to the Pleistocene” type. He cofounded Sun Microsystems and is a tech billionaire! He just became super-worried that “GNR” (genetics, nanotechnology, and robotics/AI) technologies could pose unprecedented threats to our survival.
Indeed, not long after, Joy also perused a pre-publication draft of Ray Kurzweil’s “The Age of Spiritual Machines” (1999). Kurzweil is a TESCREAList who popularized singularitarianism—the “S” in “TESCREAL”—and envisions a future in which accelerating technological development will enable us to merge with machines and become superintelligent immortals: the Singularity.

To simplify a lot: Kurzweil argued that the development of world-changing advanced tech is *inevitable*, and that the outcome would be either ANNIHILATION or UTOPIA. In contrast, Kaczynski argued that the creation of advanced tech would result in either ANNIHILATION or DYSTOPIA, and that its development is *not* inevitable, because people could choose to revolt against the megatechnics of industrial society instead.
On April 1, 2000, in Wired magazine (of all dates and places!), Bill Joy published a hugely influential article titled “Why the Future Doesn’t Need Us.” Freaked out by Kurzweil’s accelerationism, and inspired by Kaczynski’s warnings that advanced tech = doom, Joy called for a complete and indefinite halt to entire fields of emerging science and technology. There is simply no safe way forward, Joy argued, and hence we must *ban* research on whole domains of genetic engineering, nanotechnology, and artificial intelligence.
Transhumanists, Extropians, and longtermists (although the last term didn’t exist at the time) more or less mocked this proposal. “It is totally impracticable—a complete nonstarter!” they exclaimed, based on a techno-deterministic worldview according to which the freight train of “progress” simply cannot be stopped. At best, they argued, it could be *slowed down* a little bit here and there, an idea that Nick Bostrom popularized under the heading of “differential technological development.”

en.wikipedia.org/wiki/Different…
*Far more importantly*, TESCREALists argued that technoscientific “progress” SHOULD NOT be stopped. Why? Because GNR tech has salvific powers! It is our vehicle from the misery of the human condition today to a techno-utopian paradise of immortality, superintelligence, and “surpassing bliss and delight,” to quote Bostrom’s “Letter from Utopia.” It would be an absolute moral catastrophe to relinquish, as Joy proposes, entire fields of advanced science and technology. Joy is, essentially, arguing that we should forever remain imprisoned in the meat-suits of biology. He is denying us the opportunity to “transcend” our human limitations. How dare he!
But here’s the catch: these early TESCREALists almost unanimously *agreed* with Joy about the profound dangers of GNR technologies. Indeed, Bostrom argued that the probability of an existential catastrophe (likely from a GNR disaster) is *at least* 25%, while Kurzweil wrote that “a planet approaching its pivotal century of computational growth—as the Earth is today—has a better than even chance of making it through. But then I have always been accused of being an optimist.”
So, everyone was on the same page about the twenty-first century being the most dangerous in humanity’s history *because of these GNR technologies*.
The key difference between Joy and the early TESCREALists was how they *responded* to this predicament. Joy opted for broad relinquishment: “BAN THESE TECHNOLOGIES, because they’re too dangerous.” TESCREALists said: “NO NO NO, we NEED these technologies to create Utopia!! Rather, what we should do is found an entirely NEW FIELD of empirical and philosophical study to (a) understand the attendant risks of GNR tech, and (b) devise strategies to neutralize these risks. That way we can keep our technological cake and eat it, too!!”

This is how the field of Existential Risk Studies was born: it was the TESCREALists’ response to the unprecedented perils of GNR tech outlined most eloquently by Bill Joy in 2000.
And so we arrive at the present. OpenAI has initiated an “AI race” among Big Tech companies, and the capability gains between, say, GPT-2 and GPT-4 have really freaked out a whole bunch of TESCREALists. Suddenly, they’re starting to worry that the technological cake might turn around and eat THEM!

The result has been a full-circle loop all the way back to Joy’s position in 2000: “Stop! Too dangerous!! Hit the emergency brakes!!” A weak version of this is expressed in the recent “open letter” from the Future of Life Institute, which called for “all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.”

futureoflife.org/open-letter/pa…
This is “weak” because, although the phrase “all AI labs” is very strong indeed, the “pause” is not indefinite—only 6 months. After all, we should WANT advanced AI systems—AGI—to be developed eventually because they’re our ticket to paradise! “Slow down but don’t stop” is what FLI is saying.

However, a growing contingent of TESCREAL “doomers,” led by Yudkowsky (who participated in the TESCREAL movement of the 1990s and early 2000s), has come *completely* full circle, back to the position advocated by Joy and inspired by Kaczynski’s neo-Luddite arguments. For example, in a TIME magazine op-ed, Yudkowsky writes that *everything* needs to be “shut down” immediately, and he endorses military strikes to take out datacenters!

time.com/6266923/ai-eli…
Yudkowsky even suggests that we should risk thermonuclear war to prevent advanced AI from being created!!
On Twitter, he’s advocated for the destruction of AI labs (targeting property, not people), and claims that nuclear war would be acceptable to avoid an AI apocalypse so long as there were enough survivors to repopulate the planet.
This is very, very reminiscent of Kaczynski's views (!), although *so far* no one in the TESCREAL doomer camp has ACTUALLY sent bombs in the mail or tried to assassinate AI researchers. Buuuuuuut ...
don’t be too comforted by that fact: in meeting minutes from an “AI safety” workshop in Berkeley—where Yudkowsky’s “Machine Intelligence Research Institute” is based—some participants literally suggested that one way to prevent doom-from-AI is to “start building bombs from your cabin in Montana and mail them to DeepMind and OpenAI.” Montana, of course, is where Kaczynski lived during his campaign of domestic terrorism. Another bullet-point simply reads: “Solution: be ted kaczynski.”
So there you have it. The story began with Kaczynski in the mid-1990s, and has come full-circle back to Bill Joy’s Kaczynski-inspired proposal for how best to RESPOND to the dangers of advanced tech like artificial intelligence. Existential Risk Studies? That’s NOT ENOUGH. We need to shut everything down now. There’s even some chatter that perhaps Kaczynski’s tactics might be necessary to avoid DOOM.

It’s poetic because TESCREALists laughed at Joy and denigrated his idea, yet some prominent TESCREALists are now advocating for the very same idea. 😬
