Paul Scharre, 35 tweets, 14 min read
A few thoughts on the recent Atlantic piece by Kissinger, Schmidt, and Huttenlocher [THREAD]

theatlantic.com/magazine/archi…
It’s better than the last piece on AI by Kissinger, “How the Enlightenment Ends.” So I think that says something about the value of interdisciplinary thinking on AI and the need to ground analysis in the actual tech of today, not science fiction. A good takeaway for us all.
I always have mixed feelings about articles like this. On the one hand, I guess it’s good that it’s highlighting these issues for a general audience. (🤷‍♂️) On the other hand, many of the points in this article are probably not how I would have framed them.
Obviously, all of the authors are established scholars in their respective fields. So I don’t mean any disrespect to them. But hey, I also think about AI policy for a living, esp. in war, so I have thought about these things a bit.
I agree with their call to avoid anthropomorphizing AI, though I think a lot of the article actually does that in practice. Makes me wonder if that is a product of different authors, one who tends to think of AI in more anthropomorphic ways and another who is warning against it.
Overall I think the article extrapolates quite a bit beyond current AI/ML when it talks about “AI.” They are raising problems and posing questions that might be an issue someday in the future with more advanced forms of AI, but I don’t think are really an issue today.
For example, they talk about “a future in which machines help guide their own evolution”. I mean … I’m impressed as hell by the cutting edge work in Deep RL, like @OpenAI’s Dota 2 work, but I wouldn’t describe current AI/ML systems that way. Might that happen? Maybe. Maybe not.
In any case, I don’t think the article does a good job of distinguishing between the challenges posed by AI today vs. some hypothetical more advanced “AI” that might come along in the future. It just sort of lumps them all together.
It seems like a lot of the issues they’re raising aren’t about AI systems today, but rather extrapolations forward. Which is fine (the field of AI is moving at a very fast clip), but I don’t think they’re sufficiently clear about the distinction.
In other areas, they raise what they describe as “philosophical” questions that I think come from conceiving of AI in a rather anthropomorphic way, even if they say they’re not.
For example, they ask: “How can we explain AlphaZero’s capacity to invent a new approach to chess on the basis of a very brief learning period?” It seems to me the answer is that RL is a really powerful method for exploring the space of possible actions to accomplish a goal.
I think the question of where AlphaZero’s chess abilities come from only poses a philosophical challenge if you think AlphaZero has some metaphysical cognitive capability. Which it doesn’t. Question solved. Next.
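To make that point about RL concrete, here’s a minimal sketch of the idea that RL is search over possible actions toward a goal: tabular Q-learning on a toy corridor. This illustrates the general concept only, not AlphaZero’s actual algorithm (which combines deep networks with tree search); all names and numbers here are my own.

```python
import random

# Toy setting: a 1-D corridor with positions 0..4; reaching position 4
# ends the episode and pays reward 1. The agent must discover, by trying
# actions, that stepping right is the way to the goal.
N_STATES = 5
ACTIONS = [-1, +1]            # step left or step right
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1

def train(episodes=500, seed=0):
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        s = 0
        while s != N_STATES - 1:
            # epsilon-greedy: mostly exploit current estimates,
            # occasionally explore the action space at random
            if rng.random() < EPS:
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda act: q[(s, act)])
            s2 = min(max(s + a, 0), N_STATES - 1)   # walls clamp movement
            r = 1.0 if s2 == N_STATES - 1 else 0.0
            best_next = max(q[(s2, b)] for b in ACTIONS)
            q[(s, a)] += ALPHA * (r + GAMMA * best_next - q[(s, a)])
            s = s2
    return q

q = train()
# With these settings the learned greedy policy steps right from every state.
policy = {s: max(ACTIONS, key=lambda act: q[(s, act)]) for s in range(N_STATES - 1)}
print(policy)
```

Nothing here is “mystical”: the agent just keeps estimates of action values and improves them from experience.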
However, I think they’re wrong not to use the word “intelligence” to describe what we’re seeing with systems like AlphaZero. I realize that’s a loaded word that means lots of different things to different folks, but I like Shane Legg and Marcus Hutter’s definition of intelligence (2007).
(As an aside, here's the paper if you want to go down a rabbit hole of reading ~70 definitions of "intelligence": arxiv.org/pdf/0706.3639.…)
The Legg-Hutter definition differentiates between crystallized forms of (task-specific or domain-specific) intelligence vs. general intelligence (the ability to learn lots of different things). (See this other paper from them: arxiv.org/pdf/0712.3329.…)
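For the curious, the core of the Legg–Hutter “universal intelligence” measure from that second paper looks roughly like this (my paraphrase from memory; check the paper for the exact formulation):

```latex
% Universal intelligence of an agent \pi: expected performance across
% all computable environments E, weighted so simpler environments
% (lower Kolmogorov complexity K) count more.
\Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)} \, V^{\pi}_{\mu}
```

Here $V^{\pi}_{\mu}$ is the agent’s expected total reward in environment $\mu$. The key idea: intelligence is the ability to achieve goals across a wide range of environments, which is why a chess-only system scores as narrowly, not generally, intelligent.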
Coming from that perspective, I would argue that AlphaZero has a very, very narrow form of intelligence.
I think it’s entirely reasonable to say that AlphaZero is more “intelligent” at playing chess (or Go or Shogi, depending on the version of AlphaZero) than a human. It isn’t just doing brute-force look-ahead.
AlphaZero seems to have some capacity for a form of cognition that is relevant to playing (and winning) the game.
Whatever cognitive task we mean when we say that human chess players are "thinking" about a move to make, the machine is doing a cognitive task that is at least functionally equivalent (or better).
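On the “not just brute-force look-ahead” point: per the published AlphaZero work, the tree search is guided by a learned prior over moves via the PUCT rule, which trades a move’s estimated value against how promising the network thinks it is and how little it has been explored. A toy sketch of that selection rule (the constant and all the move statistics below are made up for illustration):

```python
import math

C_PUCT = 1.5  # exploration constant; the actual value used is an assumption

def puct_select(stats):
    """Pick the move maximizing Q(s,a) + U(s,a).

    stats: move -> (visit_count N, mean_value Q, network prior P).
    U(s,a) = c_puct * P(s,a) * sqrt(sum_b N(s,b)) / (1 + N(s,a)),
    so high-prior, rarely visited moves get a large exploration bonus.
    """
    total_visits = sum(n for n, _, _ in stats.values())
    def score(move):
        n, q, p = stats[move]
        u = C_PUCT * p * math.sqrt(total_visits) / (1 + n)
        return q + u
    return max(stats, key=score)

# A rarely visited move with a strong learned prior can outrank a
# well-explored one (illustrative numbers only):
stats = {
    "e4": (90, 0.52, 0.30),   # heavily explored, decent value
    "d4": (5,  0.40, 0.60),   # few visits, high prior -> large bonus
}
print(puct_select(stats))     # -> d4
```

This is why the search is selective rather than exhaustive: the learned prior focuses simulations on moves the network judges promising.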
I’m not a fan of the ever-shifting definition of intelligence, where what counts as “intelligent” is basically whatever only humans can do, so it keeps receding like a mirage on the horizon that we never reach.
I would prefer a definition of intelligence, which is where the Legg/Hutter paper ends up: one that acknowledges many different kinds of intelligence, with humans as only one form in a giant space of possible intelligences. We’re not that special.
I’m not the first person to say it, but I think we’re on the verge of a Copernican revolution in how we think about intelligence. AI is forcing us to realize that humans aren’t the end-all be-all of intelligence. We’re just one version of many possibilities.
So I would say that the systems people are building with AlphaZero and OpenAI Five have a *form* of intelligence. It is very narrow and task-specific, and they can’t generalize or transfer their skills to other tasks, but let’s give credit.
If we think that playing Go or chess or Dota involves intelligence, at least in some way, then these systems have that, whatever it is. They aren’t just following human programming or training.
They're not doing metacognition (thinking about thinking), and I'm not saying they have some degree of consciousness or self-awareness, but demanding those before using the word "intelligence" is shifting the goalposts. That's not how I would define it.
I also have mixed feelings about the portion on nuclear strategy & deterrence. The risks they describe from AI’s opacity, the difficulty of assessing and measuring it, are somewhat valid, but they make too much of it and don’t consider countervailing factors.
For example, maybe AI systems will improve nations’ abilities to collect intelligence on others with more advanced ISR, data fusion, prediction, etc., increasing transparency among states and making miscalculation and surprise less likely, which would be stabilizing?
To their point about the difficulty in assessing others' military capabilities, I would chalk that up to software & digital systems more than AI per se. When military advantage is more about the software than the hardware, how do you measure capability? I don’t know.
The challenge in measuring military power in an age of software is a real problem today and likely to get worse, but existing systems like the F-35 or cyber tools raise it now. We don’t need to invoke AI to get there.
Overall, I think the article overstates the challenges from the opacity of AI in the military space.
I’ve spent a lot of time thinking about how the “black box” problems of AI and autonomy might contribute to miscalculation and accidents with weapons, but phrases like “weapons of unknowable potency” make me cringe. What are they talking about?
In general, I think the article adopts a somewhat mystical tone toward AI that I don’t really support. I wonder how the article would read if you did a find-and-replace of “AI” with “magic.” Because there’s a bit of that vibe.
I’m not one to pooh-pooh the progress made in recent years with deep learning, particularly deep reinforcement learning. I think things like AlphaZero and OpenAI Five are really impressive, and I would even use the word “intelligence” to describe them (in a very limited way).
But AI isn’t magic. AI poses a lot of challenges in the world. I’m glad the authors are raising them, but I’m focused on challenges a bit more down to Earth.
Thread by Paul Scharre