Thread by @jonathanstray, 3 tweets
There is a gap in the current Silicon Valley AI safety research ecosystem. Most organizations are studying artificial general intelligence, but much dumber systems will come online long before we have AGI, and may do a lot of damage. 1/3
Examples include pricing AIs that learn to collude (already demonstrated in the lab), biased criminal justice risk assessments, and algorithmic trading systems that end up funding arms dealers and environmental damage. These systems need to be built to encode social values. 2/3
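To make the pricing-collusion point concrete, here is a minimal sketch: two independent Q-learning agents repeatedly setting prices in a toy duopoly. Everything here (the price grid, the demand function, the learning parameters) is invented for illustration and is not the published lab setup; depending on parameters, agents like these can settle on prices above the competitive level without ever communicating.

```python
import random

PRICES = [1, 2, 3, 4, 5]   # discrete price levels (invented for this toy)
COST = 1                   # marginal cost per unit

def demand(own, rival):
    # Toy demand: the cheaper seller captures most of the market.
    if own < rival:
        return 10 - own
    if own == rival:
        return (10 - own) / 2
    return max(0, 4 - own)  # residual demand when overpriced

def profit(own, rival):
    return (own - COST) * demand(own, rival)

ALPHA, GAMMA = 0.1, 0.9
q = [{}, {}]  # one Q-table per agent, keyed by last period's (p0, p1)

def table(agent, state):
    return q[agent].setdefault(state, {p: 0.0 for p in PRICES})

state = (random.choice(PRICES), random.choice(PRICES))
for t in range(200_000):
    eps = max(0.01, 0.9999 ** t)  # decaying exploration rate
    acts = []
    for a in (0, 1):
        if random.random() < eps:
            acts.append(random.choice(PRICES))
        else:
            acts.append(max(table(a, state), key=table(a, state).get))
    rewards = (profit(acts[0], acts[1]), profit(acts[1], acts[0]))
    nxt = (acts[0], acts[1])
    for a in (0, 1):
        best_next = max(table(a, nxt).values())
        old = table(a, state)[acts[a]]
        table(a, state)[acts[a]] = old + ALPHA * (rewards[a] + GAMMA * best_next - old)
    state = nxt

print("late-run prices:", state)  # often above 2, the one-shot Nash price here
```

The unsettling design point: neither agent is told to collude. Each just maximizes its own reward, and the high-price outcome, when it emerges, is a property of the learning dynamics.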
The work of labs like @OpenAI and @MIRIBerkeley is important but decades away from use. We need concrete and computable social metrics for the near term. We need engineering best practices -- today -- for building systems that protect the environment, ensure fairness, reduce inequality, etc. 3/3
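As one example of what a "concrete and computable" social metric can look like: a minimal sketch of a demographic parity check over an automated system's decision log. The group labels, the log, and the 0.1 threshold mentioned in the comment are all hypothetical; real deployments would pick metrics and thresholds to fit their domain.

```python
from collections import defaultdict

def demographic_parity_gap(decisions):
    # decisions: iterable of (group_label, approved: bool) pairs.
    totals, approvals = defaultdict(int), defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        approvals[group] += approved   # True counts as 1
    rates = {g: approvals[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical audit log from an automated decision system.
log = [("A", True), ("A", True), ("A", False),
       ("B", True), ("B", False), ("B", False)]
gap, rates = demographic_parity_gap(log)
print(rates)                     # approval rate per group
print(f"parity gap: {gap:.2f}")  # 0.33; a deploy gate could assert gap < 0.1
```

The point is not that demographic parity is the right metric everywhere; it is that a one-number check like this can sit in a test suite and block a deploy, which is exactly the kind of engineering practice available today.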