In futures anything like the present world, a major reason I expect AGI to have a discontinuous impact is that I expect AGI will not be *allowed* to do huge amounts of economic work before it is *literally* revolutionary, in the sense of political revolution.

What's AI gonna do in the current political environment? Design safe, reliable, effective vaccines in an hour instead of a day? That already wasn't the part that takes time; the part that takes time is the FDA, which doesn't go any faster with non-politically-revolutionary AI.
Is an AI going to build houses? We can already build houses, technologically speaking. It's just illegal.

Is an AI going to practice better medicine? With no MD??!?

Can an AI tutor kids? Colleges aren't about human capital. Knowledge isn't worth money; accreditation is.

At some point you have an AGI powerful enough that it can build nanomachines that act directly on the physical world, bypassing regulators and militaries. Is a precursor of the same tech powerful enough to create a self-driving car? Yes, but not powerful enough to legalize it.

Would a medical AI get legalized eventually? Maybe in 5 or 10 years. Newer tech gets prototyped faster than that. If the medical AI came about as precursor tech of strong AGI, I think the 'revolutionary' later tech gets developed before the earlier tech is *legalized*.

And that's why I expect world GDP to tick along at roughly the current pace, unchanged in any visible way by the precursor tech to AGI; until, on the most probable outcome, everybody falls over dead in 3 seconds after diamondoid bacteria release botulinum into our blood.

Or, optimistically, if the makers succeed on alignment, they do something else drastic that prevents the world from being destroyed by the *next* AGI. This pivotal act must bypass regulators. "The capacity to bypass regulation" is a threshold that produces discontinuous outputs.

Before AGI, there will be experimental prototype AIs that can design vaccines and pilot cars and build houses; but their work won't be legal. What will be legal is code-writing assistants and anime waifus, and I'm not sure about the waifus. That won't lift up world GDP by much.

As these predictions are specific and also about the Future, which is notoriously much harder than the Past to predict with specificity, please treat them with all the epistemic caution due to any futuristic prediction mentioning the legality of AI anime waifus.

Still, in much broader generality, I do suspect - though I am not certain - that precursor tech to AGI may not produce a visible jump in world GDP, because we've made a world economy that will dilly-dally about integrating AI inputs until AGI gets strong enough to bypass it.
