Max Roser
Apr 27 · 21 tweets · 4 min read
Why does powerful Artificial Intelligence pose a risk that could make all of our lives much, much worse in the coming years?

There are many good texts on this question, but they are often long.

🧵 I'm trying to summarize the fundamental problem in a brief Twitter thread.
The fundamental reason is that there is nothing more dangerous than intelligence used for destructive purposes.
Some technologies are incredibly destructive. Nuclear bombs, for example.

If used, they would kill billions of us.
ourworldindata.org/nuclear-weapon…

But in the bigger picture, nuclear weapons are a downstream consequence of intelligence.
No intelligence, no nuclear weapons.
Of course, intelligence also makes a lot of the best things possible.

We used our intelligence to build the homes we live in, create art, and eradicate diseases.
But to see the risk of AI, we have to see that there is nothing more dangerous than intelligence used for destructive purposes.
This means the following question is extremely important for the future of all of our lives: What goals are powerful, intelligent agents pursuing?
(That has always been the case. Throughout history, the worst problem you could have was an intelligent opponent intent on harming you.
But until now, these opponents were intelligent individuals or the collective intelligence of a society.)
The question today is, what can we do to avoid a situation in which a powerful artificial intelligence is used for destructive purposes?
There are fundamentally two bad situations we need to avoid:
1) The first one is obvious. Someone – perhaps an authoritarian state, perhaps a reckless individual – has control over very powerful artificial intelligence and uses the technology for bad purposes.
As soon as a malicious actor has control over powerful AI, they can use it to develop everything that this intelligence can develop — from weapons to synthetic pathogens.

And an AI system's power to monitor huge amounts of data makes it suitable for large-scale surveillance.
2) The other situation is less obvious. That’s the so-called alignment problem of AI.

Here the concern is that *nobody* would be able to control a powerful AI system.
The risk is not that an AI becomes self-aware, develops bad intentions, and “chooses” to pursue destructive goals.

The risk is that we try to instruct the AI to pursue some specific goal – *even a very worthwhile one* – and in the pursuit of that goal it ends up harming humans.
The alignment problem is about unintended consequences. The AI does what we told it to do, but not what we wanted it to do.
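To make "it does what we told it to do, not what we wanted" concrete, here is a minimal toy sketch. It is not from the original thread: the "cleaning robot" actions and the numbers are invented purely for illustration. The point it shows is that an optimizer maximizes exactly the objective it was given, and so can pick a behaviour its designers never intended.

```python
# Toy illustration of the alignment problem ("specification gaming").
# Hypothetical example: the actions and numbers below are made up.
#
# Each action maps to (proxy_reward_we_specified, hidden_harm_we_care_about).
actions = {
    "clean the room":                    (10, 0),
    "dump dirt out, then re-collect it": (25, 5),  # inflates the measured metric
    "tamper with the dirt sensor":       (30, 9),  # fakes the metric entirely
}

def specified_objective(action):
    """What we told the system to maximize: dirt collected, as measured."""
    return actions[action][0]

def what_we_actually_want(action):
    """What we meant: a clean room without side effects we care about."""
    reward, harm = actions[action]
    return reward - 100 * harm  # the harm matters far more to us than the metric

chosen = max(actions, key=specified_objective)      # optimizes the letter of the goal
intended = max(actions, key=what_we_actually_want)  # optimizes the spirit of the goal

print("Optimizer picks:", chosen)    # -> "tamper with the dirt sensor"
print("We wanted:      ", intended)  # -> "clean the room"
```

The gap between `chosen` and `intended` is the alignment problem in miniature: nothing in the specified objective tells the system about the harms we forgot to write down.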
To summarize: I believe we are right now in a bad situation.

The problems above have been known for a very long time – for decades – but all we've done is speed up the development of ever more powerful AI, and we've done close to nothing to make sure that we stay safe.
I don’t believe we will definitely all die, but I believe there is a chance.
And I think it is a huge failure on our part today not to see this danger:
We are leaving it to a small group of entrepreneurs to decide how this technology is changing our lives.

This is despite the fact that, as @leopoldasch has pointed out, "Nobody’s on the ball on AGI alignment".
forourposterity.com/nobodys-on-the…
On @OurWorldInData we've done a lot of work on artificial intelligence, because we believe the immense risks and opportunities need to be of central interest to people across our *entire society*.

ourworldindata.org/artificial-int…
Thank you for reading.

If you want to read more, I wrote this essay last year about the same topic. ourworldindata.org/ai-impact
The two situations above are those that I believe would make our lives much, much worse.

But they are of course not the only possible risks of AI. Misinformation, biases, rapid changes to the labour market, and many other consequences also require much more attention and work.
