My first op-ed in @WIRED: how the AI feedback loops I helped build at YouTube can amplify our worst inclinations, and what to do about it.

wired.com/story/the-toxi…

1/
Earlier this year, a YouTuber showed that YouTube's recommendation algorithm was steering thousands of users toward sexually suggestive videos of children, which a network of pedophiles was exploiting.

YouTube bans sexually explicit videos. So how did this happen?

2/
At YouTube, we designed the AI to maximize engagement. So if pedophiles spend more time on YouTube than other users, the AI's job becomes, in effect, to try to *increase* their numbers.

3/
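To make that incentive concrete, here is a minimal sketch (my own illustration, with invented names and numbers, not YouTube's actual code) of a ranker whose only objective is predicted watch time:

```python
# A minimal sketch (illustrative, not YouTube's actual code) of an
# engagement-only ranker: candidates are ordered purely by predicted
# watch time, so whatever hyper-engaged users binge on floats to the top.

from dataclasses import dataclass

@dataclass
class Video:
    title: str
    predicted_watch_minutes: float  # hypothetical model output

def rank_by_engagement(candidates: list[Video]) -> list[Video]:
    # The only objective is expected time-on-site; nothing here asks
    # whether the content is good for the user.
    return sorted(candidates,
                  key=lambda v: v.predicted_watch_minutes,
                  reverse=True)

feed = rank_by_engagement([
    Video("calm documentary", 4.0),
    Video("divisive rant", 11.5),  # hyper-engaged users watch to the end
    Video("how-to tutorial", 6.0),
])
print([v.title for v in feed])
# ['divisive rant', 'how-to tutorial', 'calm documentary']
```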
Even after companies like Nestlé and Disney pulled their ads from YouTube, the problem was not fully fixed: last month the @nytimes showed that the recommendation engine was still promoting those videos! 4/

nytimes.com/2019/06/03/wor…
The second time around, YouTube reacted more forcefully.

Let's take a look at the big picture. 5/
@DeepMindAI has shown that recommendation systems can give rise to "filter bubbles" and "echo chambers".

But are these bubbles content-neutral? 6/

Even without understanding the AI's internals, we can predict which filter bubbles will be favored. How? By looking at how engagement metrics create feedback loops. 7/
The feedback loop works like this (a toy simulation follows below):
1) Content that hyper-engaged users like gets more views
2) The AI then recommends it more, since it maximizes engagement
3) Creators notice and produce more of it
4) People spend even more time on it

8/
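Here is a toy simulation of that loop (all numbers and rates are illustrative assumptions, not measurements): even a small niche of hyper-engaging content grabs a growing share of recommendations once supply chases promotion.

```python
# Toy simulation of the four-step loop above. Two topics compete for
# recommendations; the "divisive" one starts as 10% of the catalog but
# keeps viewers watching three times longer.

topics = {
    # topic: share of catalog, avg. minutes watched per recommendation
    "divisive": {"supply": 0.1, "minutes_per_view": 12.0},
    "neutral":  {"supply": 0.9, "minutes_per_view": 4.0},
}

for step in range(5):
    # Steps 1-2: recommendations go to whatever maximizes expected
    # watch time, weighted by how much of that content exists.
    scores = {t: d["supply"] * d["minutes_per_view"] for t, d in topics.items()}
    total = sum(scores.values())
    rec_share = {t: s / total for t, s in scores.items()}

    # Step 3: creators chase recommendations, so the catalog drifts
    # toward promoted topics (10% per step, an arbitrary rate).
    for t, d in topics.items():
        d["supply"] = 0.9 * d["supply"] + 0.1 * rec_share[t]

    # Step 4: more supply of the engaging topic means more time spent,
    # which feeds back into the next round of scores.
    print(step, {t: round(share, 2) for t, share in rec_share.items()})

# The divisive topic's share of recommendations grows every iteration,
# despite starting at only 10% of the catalog.
```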
Eventually, hyper-engaged users drive the topics promoted by the AI.

Content that plays to our worst inclinations, such as misinformation, rumors, and divisive material, generates hyper-engaged users, so it often gets *favored* by the AI. 9/
One example from last week:

Justin Amash said, "Our politics is in a partisan death spiral." Is this "death spiral" good for engagement? Certainly: partisans are hyper-active users, so their content benefits from massive AI amplification. 10/
washingtonpost.com/opinions/justi…
AIs were supposed to solve problems, but they can end up amplifying others. What should we do?

11/
Platforms have acknowledged some of these problems and are starting to take action. Here's how. 12/
Mark Zuckerberg wrote this post to explain why @Facebook needs to demote "borderline content". 13/

facebook.com/notes/mark-zuc…
In January 2019, YouTube announced that it aims to reduce recommendations of harmful misinformation. 14/

But these efforts are limited to specific types of harmful content, and they run against the platforms' business interests. Hence, the changes are likely to be minimal.

15/
When I raised these problems internally, some Googlers told me "it's not our fault if users click on ****".

But part of the reason people click on this content is that they trust @YouTube. 16/
The root problem is that users place too much trust in @Google and @YouTube. 17/
Recommendations can be *toxic*: they can gradually harm users, in ways that are difficult to see without access to large-scale data. 18/
Researchers in universities around the world don't have the right data to understand the impact of these AIs on society. 19/
For instance, researchers at the Oxford Internet Institute concluded this week: “Until Google, Facebook [...] share the data being saved on to their servers [...], we will be in the dark about the effects of these products on mental health”
theguardian.com/commentisfree/… 20/
Conclusions

Users:

=> Stop trusting Google/YouTube blindly

Their AI works in your best interest only if your goal is to spend as much time as possible on the site. Otherwise, it may work against you, wasting your time or manipulating you. 21/
Platforms:

=> Be more transparent about what your AI decides
=> Align your "loss function" with what users actually want, not raw engagement (a sketch of what that could mean follows below) 22/
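What could "aligning the loss function" look like in practice? A hedged sketch, with invented names and weights: blend the engagement objective with a signal users explicitly control, such as post-watch satisfaction surveys.

```python
# Hypothetical training losses for a recommender. The names, signals,
# and weights are illustrative assumptions, not any platform's API.

def engagement_loss(pred_minutes: float, actual_minutes: float) -> float:
    # Pure engagement objective: only time-on-site matters.
    return (pred_minutes - actual_minutes) ** 2

def aligned_loss(pred_minutes: float, actual_minutes: float,
                 pred_satisfaction: float, survey_score: float,
                 alpha: float = 0.5) -> float:
    # Blend engagement with an explicit "was this worth your time?"
    # signal. Choosing alpha is a product decision, not a technical one.
    engagement = (pred_minutes - actual_minutes) ** 2
    satisfaction = (pred_satisfaction - survey_score) ** 2
    return alpha * engagement + (1 - alpha) * satisfaction
```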
Regulators:

=> Create a special legal status for algorithmic curators
=> Demand some level of transparency for recommendations. This will help us understand the impact of AI, and boost competition & innovation

IBM has advocated for this kind of legislation.
23/
Here's the full article for more details:
wired.com/story/the-toxi…

24/