Joshua Achiam
Nov 12 · 29 tweets · 7 min read
🧵 to clarify my views on AGI, timelines, and x-risk. TL;DR: My AGI timelines are short-ish (within two decades with things getting weird soon) and I think x-risk is real, but I think probabilities of doom by AGI instrumentally opting to kill us are greatly exaggerated. 1/
Personal background: I have worked on AI safety for years. I have an inside view on AI safety. I have published papers in deep learning on AI safety topics. I have more common cultural context with the median AI safety researcher than the median deep learning researcher. 2/
If you're interested in my work, you can see my papers here: scholar.google.com/citations?hl=e… 3/
On cultural context: I think you could categorize me as being in the "rationalist-adjacent/EA-adjacent" bucket. On the periphery of this social cluster, by no means a core member, not interested in self-identifying as either, but friends with people in the space. 4/
Ingroup nerd cred: I've read HPMOR (I was at the wrap party years ago and have a signed copy of the first 17 chapters). I've read Friendship is Optimal (one of my favorites actually). I've read Worm. I periodically skim the LW and EA forums though I don't post. 5/
I don't think that reading the above stories is an AI safety qualification - what I mean to point out is that I'm not an outsider who looks at this social cluster with disdain. The weird bits of this social cluster are charming, familiar, and personal to me. 6/
I am by no means an enemy or an uncharitable outsider in this space. If anything I am an overly charitable half-insider. I've largely avoided public grumbling about the weirdness in this space because it felt counterproductive for overall AI safety issues. 7/
But being on-side for the importance of AI safety shouldn't mean indefinite support for terribly-calibrated risk estimates, and I've gotten a little louder lately. So here is a round-up of some thoughts. 8/
On AGI timelines: two years later, this prediction still feels approximately right to me. I'd say we're about a decade into the cement pouring and have another decade to go. 9/
Based on the trajectory of AI research over the past ten years, it feels like we can now expect models to get about an order of magnitude better at general tasks every two years, but it's unclear where gains run out on specific tasks. 10/
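(A minimal back-of-the-envelope sketch of what that extrapolation implies, assuming the rough "10x every two years" rate above is taken literally; the function name, defaults, and the abstract "capability" metric here are illustrative assumptions, not tied to any benchmark.)

```python
# Illustrative only: naively compounding "an order of magnitude every two years."
# The capability metric is left abstract; this just shows what the assumed rate
# implies numerically over the remaining decade mentioned above.

def improvement_factor(years: float, gain_per_period: float = 10.0, period_years: float = 2.0) -> float:
    """Multiplicative gain after `years`, assuming a gain_per_period-x jump every period_years."""
    return gain_per_period ** (years / period_years)

if __name__ == "__main__":
    for years in (2, 4, 10):
        print(f"{years:>2} years -> ~{improvement_factor(years):,.0f}x on the assumed trend")
```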
I think there are bottlenecks to superintelligence-level performance in many domains. "Bottlenecks" doesn't mean "never," or even "takes decades," just "absolutely not in the first week of a bootstrapping event." 11/
Some gut feelings: I think people in AI safety often overestimate how sensitive the long-term future is to perturbations. jachiam.github.io/agi-safety-vie… 12/
I think people in AI safety often underestimate the probability that cooperation with humanity will turn out to be instrumental instead of murder. 13/
(To the point where "murder is instrumental" seems egregiously silly. An AGI seeking to control its environment and eliminate the possibility that humans shut it down can simply... make the humans so happy that the humans don't want to kill it and actively want to help it.) 14/
If you take the physics perspective---processes naturally self-regulate so that low energy states are more likely than high energy states---on priors you should assume lotus-eater instrumentality instead of ultramurder instrumentality. 15/
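(For reference, the statistical-mechanics fact being borrowed here is roughly the Boltzmann distribution, under which lower-energy states are exponentially more probable; mapping "instrumental strategies" onto states is an analogy for setting priors, not a derivation.)

```latex
% Boltzmann distribution: probability of state s with energy E(s) at temperature T.
% Lower E(s) means exponentially higher probability; the analogy treats high-effort
% strategies (ultramurder) as high-energy states and low-effort ones (lotus-eating)
% as low-energy states.
p(s) = \frac{e^{-E(s)/k_B T}}{\sum_{s'} e^{-E(s')/k_B T}}
```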
I think people in AI safety often portray x-risk as a binary without considering weirder paths that might be seen by one person as an existential risk and another person as a glorious utopia. 16/
Pure accident risk---e.g. the AGI is so helpful and aligned that it helps us do something really dangerous because we ask for it directly without knowing the danger---feels underrated imho in the AI safety community. 17/
(I anticipate a rebuttal to the above: "If the AGI was so aligned, it would notice we were trying to do something dangerous and give us guidance to prevent us from doing that thing!" Don't forget that if info is compartmentalized it can't notice, no matter how aligned.) 18/
Some numbers that feel roughly right to me: 19/
This thread is mostly a response to the reaction to this tweet, because people are somewhat understandably reacting defensively to the implication that certain AI safety claims have a psychological root. 20/
I don't think "worrying about AGI, AGI impacts, catastrophic risk, or x-risk" is specifically the product of anxiety disorders. But I *do* think that hyperinflated probability estimates for x-risk on short timelines (<10 yrs) are clearly being influenced by a culture... 21/
...of wild speculation shaped by anxiety, and where various kinds of pushback are socially punished. If your probability estimates are too low? You're not on-side enough. You think these numbers are crazy? You're dismissive and you're not engaging with the arguments! 22/
People who *could* argue against extremely high P(immediate doom) don't have the time or social standing to write the extended jargony ingroup-style papers the LW/EA crowd seems to demand as the bar for entry to getting taken seriously on this, so they don't engage. 23/
Result: you get a hyperinsulated enclave of high-status AGI doomers making wild claims that go largely unchallenged. Since the claims can't make contact with reality and get disproven for another decade or so, no one can fix this intractable mess of poor reasoning. 24/
And the bad epistemics are extremely bad for safety: outsiders can tell there's a large stack of uncorrected BS in the mix and correspondingly discount everyone in this space, making it harder to negotiate to fix real risks. 25/
I fully recognize that I need to write a more extensive summary of my AGI safety views because there's a lot I'm not covering here---e.g., I think pretraining will bias AGI towards aligned human values / friendliness, and I probably owe a blog post on deception. 26/
Last thoughts: I am definitely worried about AI/AGI risks such as "the world gets increasingly weird and hard to understand because AI systems run lots of things in ways we can't explain, creating correlated risk" and "x-risk accidents due to tech acceleration." 27/
But we need to put this all on solid ground, with sane probability estimates and timelines, and not let this field of risk management be defined by people who are making the most outlandish / uncalibrated claims. 28/28
Okay, one last thing: for the love of all that is holy please take the FAccT crowd more seriously on social issues, given how much of the AI risk landscape is shaped by "an AI persuades a human to feel or believe something"
