Andrew Maynard
Nov 27 · 21 tweets · 3 min read
A thread about AI and risk -- the tl;dr is that AI risk is complex, very poorly understood, in need of much more engagement with risk science experts, and in danger of being dominated by AI experts who are not that expert in risk! (1/21)
Risk is complex. But it’s also widely studied. There are risk experts who are well versed in all areas of the field -- including risks from emerging tech (I’m one of them). But being an expert in AI does not make you an expert in risk (2/21)
The trouble is, many conversations about AI and risk are currently being dominated by AI experts who are sometimes painfully naïve about risk, and how to approach it -- especially on social media (3/21)
This leads to increasingly dogmatic and polarizing statements around AI risk that most risk experts would shy away from, simply because so little is known about how to even formulate AI risk problems (4/21)
As a result, there’s a growing need to go back to risk basics and get a better understanding of what risk is, and how we can collectively begin to make sense of a deeply complex risk landscape around AI (5/21)
As a starting point, risk can be thought of as the probability of harm occurring from an action, process, or situation. This may seem a simple statement, but it needs a lot of unpacking (6/21)
First off, risk is about cause and effect -- the harm is the effect, the actions etc. are the cause. No cause, no risk. (7/21)
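To make that definition concrete, here's a minimal sketch of risk as the probability of harm arising from a cause -- every number, threshold, and name below is an illustrative placeholder, not anything from the thread:

```python
import random

def estimate_risk(action_severity: float, trials: int = 100_000) -> float:
    """Toy estimate of risk as P(harm | action).

    Each trial draws a random outcome of the action; harm occurs when
    the outcome exceeds an (arbitrary) tolerance threshold. Severity
    scales how bad outcomes can get. Purely illustrative numbers.
    """
    harms = sum(
        1 for _ in range(trials)
        if random.random() * action_severity > 0.8  # arbitrary harm threshold
    )
    return harms / trials

# No cause, no risk: zero severity can never cross the harm threshold.
print(estimate_risk(0.0))  # -> 0.0
print(estimate_risk(1.5))  # -> roughly 0.47
```

The point isn't the model (it's deliberately trivial); it's that "risk" only exists once a cause is connected to a possible harm.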
When applied to chemical substances, this leads to risk being understood as a function of hazard and exposure, where hazard is the potential to cause harm, and exposure is what transforms hazard into risk (8/21)
And this is where things get tricky with AI risk. There’s still uncertainty around what constitutes hazard, what types of harm we’re looking at, and what “exposure” might mean (9/21)
We don’t even know what the functions transforming hazard and “exposure” into risk might look like (10/21)
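For chemicals, a toy version of that hazard/exposure framing might look like the sketch below -- assuming the simplest possible combining function (plain multiplication), which is exactly the kind of thing we *can't* yet write down for AI:

```python
def chemical_risk(hazard: float, exposure: float) -> float:
    """Classic framing: risk = f(hazard, exposure).

    f is bare multiplication here purely for illustration; real chemical
    risk assessment uses dose-response curves, and for AI we don't yet
    know what f -- or even its inputs -- should be.
    """
    return hazard * exposure

# A potent hazard with no exposure carries no risk under this framing...
print(chemical_risk(hazard=0.9, exposure=0.0))  # -> 0.0
# ...because exposure is what transforms hazard into risk.
print(chemical_risk(hazard=0.9, exposure=0.5))  # -> 0.45
```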
And to complicate things further, there are strong arguments to be made that some exposures actually reduce risk, or that harm includes *not* developing AI that could support human flourishing (11/21)
And so, even with this simple framing that draws on risk science, AI risk begins to look increasingly complicated. But this is just the start (12/21)
Time also has to be factored into risk -- what are the AI equivalents of chronic versus acute exposure, and how do their impacts differ? What about intergenerational impacts? (and of course, effective altruism kicks in here) (13/21)
And what exactly does “harm” mean -- are we talking about threats to life, critical infrastructure, the environment, human existence, autonomy, identity, mental health, wellbeing, all of the above? (14/21)
Risk is complex, and AI risk multiply so -- so much so that it’s a brave person indeed who claims to know the answers to managing what cannot even be defined! (15/21)
The good news is that there are ways forward here. But they depend on putting ego aside, recognizing relevant expertise, being willing to listen and learn, and taking transdisciplinary approaches to fiendishly wicked challenges (16/21)
And they will depend on developing new understanding around how to approach and address AI risks -- including accepting that risk is inevitable, that it’s a social construct, and that we collectively need to agree on what is OK and what is not (17/21)
One approach is to think of risk as a threat to what is valuable -- whether materially, societally, or personally (a sense of identity, for instance). This can include threats to future value (e.g. a world that doesn't benefit from AI) as well as to what we already have (18/21)
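As a rough illustration of that value-based framing, a toy expected-loss calculation might look like this -- the categories and probabilities are hypothetical placeholders, not estimates from the thread:

```python
def expected_loss(current_value: float, future_value: float,
                  p_lose_current: float, p_lose_future: float) -> float:
    """Toy 'threat to what is valuable' model.

    Counts both what we already have and future value (e.g. a world
    that benefits from AI) that a risk could foreclose. All inputs
    are hypothetical placeholders.
    """
    return current_value * p_lose_current + future_value * p_lose_future

# Foreclosing a valuable future counts as harm alongside present losses:
print(expected_loss(current_value=100, future_value=40,
                    p_lose_current=0.01, p_lose_future=0.10))  # -> 5.0
```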
This is an approach that allows pragmatic strategies to protect and grow what is valuable -- it’s a much healthier approach to risk than just saying no (19/21)
The bottom line though is that we still don’t even know how to talk about AI and risk, and as a result we need a lot less posturing and a lot more listening, learning, and collaboration (20/21)
For more, it's worth checking out this article: futureofbeinghuman.com/p/everything-y… And thanks for persevering through a long thread! (21/21)
