I did sign this statement on AI extinction risk. My thoughts below🧵

safe.ai/statement-on-a…
When I first received word of the statement from @DanHendrycks, its wording was different. I replied right away stating that I was not comfortable signing the statement as worded.
My emailed response to him is in the photos attached.
After a couple more email exchanges, Dan changed the lead-in text to the one you see at the top of the statement now.

While I believe some of the other risks from AI are critically important, I also have genuine and deep concern about a possible human extinction outcome.
This concern led me to publish in the AI safety realm, in parallel to and related to my work in AI fairness. In fact, just last summer, I coauthored a paper exploring extinction scenarios that do not presuppose any AI capabilities beyond those that exist today.
In this paper, coauthored with Ben Bucknall and published in @AIESConf 2022, we explored pathways for AI to increase other, non-AI, sources of extinction risk. You know, boring things like nuclear war and climate change. That could kill everyone.

dl.acm.org/doi/10.1145/35…
@AIESConf At the time of publication, we did include "Unaligned AGI" as a possible extinction / existential risk. I was unaware at the time of the eugenicist and racist origins of the term "AGI," and reject that term today. Nevertheless, the paper's main thrust stands.
@AIESConf Contrary to many other people in the field, especially those on the supposed safety-vs-ethics debate, I also do not believe there is an inherent contradiction between the AI alignment problem and ethical issues in AI. In fact, I have argued, and will continue to argue, that...
AI alignment is hard, because aligning humans is hard. I've learned this through over twenty years in the field, mainly working on controversy and disinfo, many of them spent working alongside and in collaboration with experts in social science, medicine, and other (non-CS) fields.
By extension, AI ethics concerns and issues *are* already a form of misaligned AI. Nowhere is that more obvious than in my own research area: mis- and disinformation online and in social media. We've had misaligned AI for over a decade now. It's called Facebook (and Co).
We cannot seriously hope to align AI to all of humanity as such, because humanity is full of messy humans with contradictory, conflicting goals and needs, aka, unaligned. Put differently: show me how to solve for world peace, and I will show you how to solve the AI alignment problem.
We can and should include this in our models. Any model that fails to capture how messy humanity is will fail to truly capture the alignment problem. I coauthored a paper on this topic with my PhD student @AidanKierans and collaborator @HananelHazan:
openreview.net/forum?id=vtf9c…
@AidanKierans @HananelHazan If anything, the debate around AI safety and xrisks demonstrates firsthand the importance of my own PhD work, namely, how to model and understand #controversy. This is not a black-and-white topic. I disagree with many who think AI poses an extinction risk on the risk specifics;
I vehemently disagree with those calling for violence of any kind in the name of safeguarding humanity from AI doom; but I also disagree with those dismissing these risks as irrelevant or unimportant.
Humanity needs us to get this right: to build AI that is ethical, AND safe.
Rather than positing two opposing camps, let's set aside the so-called narcissism of small differences. It's time to put ethics and safety at the forefront of every AI effort, and it's time to regulate tech bros rather than hope they regulate themselves (that never works). /fin
