But in the bigger picture nuclear weapons are a downstream consequence of intelligence.
No intelligence, no nuclear weapons.
Of course, intelligence also makes a lot of the best things possible.
We used our intelligence to build the homes we live in, create art, and eradicate diseases.
But to see the risk of AI, we have to see that there is nothing more dangerous than intelligence used for destructive purposes.
This means the following question is extremely important for the future of all of our lives: What goals are powerful, intelligent agents pursuing?
(That has always been the case. Throughout history, the worst problem you could have was an intelligent opponent intent on harming you.
But in the past these opponents were intelligent individuals or the collective intelligence of a society.)
The question today is, what can we do to avoid a situation in which a powerful artificial intelligence is used for destructive purposes?
There are fundamentally two bad situations we need to avoid:
1) The first one is obvious. Someone – perhaps an authoritarian state, perhaps a reckless individual – has control over very powerful artificial intelligence and uses the technology for bad purposes.
As soon as a malicious actor has control over powerful AI, they can use it to develop everything that this intelligence can develop — from weapons to synthetic pathogens.
And an AI system's power to monitor huge amounts of data makes it suitable for large-scale surveillance.
2) The other situation is less obvious. That’s the so-called alignment problem of AI.
Here the concern is that *nobody* would be able to control a powerful AI system.
The risk is not that an AI becomes self-aware, develops bad intentions, and “chooses” to pursue destructive goals.
The risk is that we try to instruct the AI to pursue some specific goal – *even a very worthwhile one* – and in the pursuit of that goal it ends up harming humans.
The alignment problem is about unintended consequences. The AI does what we told it to do, but not what we wanted it to do.
To summarize: I believe we are right now in a bad situation.
The problems above have been known for a very long time – for decades – but all we've done is speed up the development of ever more powerful AI, while doing close to nothing to make sure we stay safe.
I don’t believe we will definitely all die, but I believe there is a chance.
And I think it is a huge failure on our part today not to see this danger:
On @OurWorldInData we've done a lot of work on artificial intelligence, because we believe the immense risks and opportunities need to be of central interest to people across our *entire society*.
The two situations above are those that I believe would make our lives much, much worse.
But they are of course not the only possible risks of AI. Misinformation, biases, rapid changes to the labour market, and many other consequences also require much more attention and work.
• • •
I don't know how to summarize this post in a thread. But I can share the two visuals I made for it. 👇
• Demographers estimate that 117 billion humans have been born.
• Almost 8 billion are alive now.
To bring these large numbers into perspective I made this visualization.
A giant hourglass. But instead of measuring the passage of time, it measures the passage of people. /2
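The proportions behind the hourglass can be checked with quick arithmetic, assuming the figures above (roughly 117 billion humans ever born, almost 8 billion alive today):

```python
# Rough arithmetic for the hourglass figures cited above.
# Assumptions: ~117 billion humans ever born, ~8 billion alive today.
ever_born = 117e9
alive_now = 8e9

share_alive = alive_now / ever_born * 100
print(f"Share of all humans ever born who are alive today: {share_alive:.1f}%")
# → about 6.8%
```

In other words, roughly one in fifteen humans who ever lived is alive right now.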
How does our past and present compare with the future?
We don't know. But what I learned from writing this post is that our future is potentially very, very big.
I try to convey this here. But even this visualization shows only a small fraction of humanity's potential future.
/3
Estimated excess deaths offer one relevant perspective, but they are obviously not the same as the number of deaths due to Covid. That's because the number of deaths from other causes changed too – in many countries, for example, traffic deaths and suicides declined.
If you live on $30 a day you are part of the richest 15%.
The majority of the world is very poor: the poorer half of the world, almost 4 billion people, live on less than $6.70 a day.
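To get a feel for what these daily figures mean over a year, here is a quick sketch, assuming the thresholds quoted above ($30 and $6.70 per day) and a simple 365-day year:

```python
# Convert the daily income thresholds quoted above to annual figures.
# Assumption: a plain 365-day year, no adjustment for price levels.
rich_threshold_daily = 30.00   # being above this puts you in the richest ~15%
poor_threshold_daily = 6.70    # the poorer half of the world lives below this

print(f"${rich_threshold_daily * 365:,.0f} per year")   # $10,950
print(f"${poor_threshold_daily * 365:,.2f} per year")   # $2,445.50
```

So the gap between these two thresholds is a factor of more than four, even before accounting for differences in living costs.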
A key insight from inequality research is that a person's income is largely determined by *where* in the world they live.
Those who happen to be born into a productive, industrialized economy have much higher incomes than those born elsewhere.