After many years, I think that the real core of the argument for "AGI risk" (AGI ruin) is appreciating the power of intelligence enough to realize that getting superhuman intelligence wrong, ON THE FIRST TRY, will kill you ON THAT FIRST TRY, not let you learn and try again.
From there, any oriented person has heard enough info to panic (hopefully in a controlled way). It is *supremely* hard to get things right on the first try. It supposes an ahistorical level of competence. That isn't "risk", it's an asteroid spotted on direct course for Earth.
There is further understanding that makes things look worse, like realizing how little info we have even now about what actually goes on inside GPTs, and the likely results if that stays true and we're doing the equivalent of trying to build a secure OS without knowing its code.
But that's nearly window-dressing compared to the heart-stopping jolt of realizing that an unaligned superintelligence is around as survivable as a supernova, that getting it right involves difficult work, and that if humanity gets it wrong ON THE FIRST TRY there are no do-overs.
That's just such an insanely insanely lethal challenge to face *in real life*, as opposed to sketching out plausible-sounding scenarios on paper. I suspect that a major reason why others are more optimistic is that, on some level, they haven't realized it's real life.

• • •

More from @ESYudkowsky

21 Apr
I realize this take of mine may be controversial, but the modern collapse of sexual desire does seem to suggest that our civilization has become bored with vanilla sex and we must move towards a more BDSM / Slaaneshi aesthetic in order to survive.
look, I'm not saying we couldn't ALSO solve this problem by removing the supply constraints that turn housing, medicine, childcare, and education into Infinite Price Engines; but we're clearly NOT going to do that, leaving Slaanesh as our only option
so many commenters suggesting things that are FAR less politically realistic than a mass civilizational turn towards Slaaneshi decadence. get inside the Overton Window you poor naive bunnies.
18 Apr
Suppose the US govt announces a nontaxable $300/month universal benefit for all citizens 18+ (no requirement to work or not to work), paid for by a new tax on land values (so monetarily neutral). What is the effect on wages?
I'm sorry, I definitely should've clarified this: "Wages" as in "wages-per-hour" rather than as in "total wage income of all laborers".
If you don't expect this operation to be monetarily neutral, please answer for real wages.
15 Apr
Exercise for the would-be macro thinker: regardless of whether you agree with the final conclusions (as always, when evaluating the argument itself), can you say what's utterly invalid about this thread's line of argument?
Gosh, no right answers so far. Part 1 of the very basics: wages are flows, money supply is a stock. It would still be nonsense, but *less* nonsensical, to compare "bank accounts held by the bottom 80% of laborers" to M1 or M4. Wages are money flowing in and out, not money sitting somewhere.
Part 2 of the very basics: By design of the current system, nearly all money is debt. Nearly all the money you "have" in your bank account is a debt somebody else owes the bank. So if workers did have bigger bank accounts, that would be somebody else's bigger debt.
13 Apr
To say aloud a thought that seems worth saying aloud: My inner Science Genre-Savvy guesser has a suspicion that long Covid / post-Covid syndrome might turn out to be *real but not special*. By which I mean that if our science-larping establishment manages to LARP realistically enough to gather the relevant data at all, it could turn out that influenza that lays you out in the hospital is liable to cause "post-influenza syndrome" at a similar rate due to lingering organ damage... or, maybe, that there are many people who had a common cold one year and spent the rest of their life mostly bedridden, with doctors telling them it was all in their head, because people's Just-World Hypothesis declares a common cold "shouldn't" be able to lay you out permanently like that. And that Covid-19 was just...
22 Mar
I'm not sure how true this is in real life, but I wonder if it might end up a good heuristic: "Any organization too powerful to lower itself to explain the arguments and evidence it used to reach its conclusion, is also too social-reality-driven for its conclusions to be trusted."
If the CDC issues a pronouncement about fomites, say, but they don't include any traceback to a paper describing the evidence they used to know that... there *might* be a paper like that somewhere. Or it could be like that time they announced that masks didn't help with Covid-19.
Could government pronouncements about vaccines be correct? Totally! But the way you *know*, if so, is that you separately heard from a source citing a particular study that showed 95% reduction in symptomatic C19. Only that part is part of a process where words mean things.
20 Mar
@paulg There once was a startup that had no business plan. "Why worry?" they said. "You can't predict the future. Who, in 2018, expected Covid-19? Our future business difficulties are impossible to correctly imagine. Why pretend we can?"

"A business plan is not *the* future," replies Paul Graham. "A business plan checks model consistency: is there a plausible world where success is possible? And a business plan writes out your assumptions; which matters, not because every assumption will be true, but so you notice if one is turning out false."

(Of course, in real life, you've already decided not to fund the startup at this point. They're obviously quite doomed. It's not like you've got no choice but to bet the fortunes of your entire extended family on repairing this particular startup. But still, let's continue.)
