Thread by Jason Kint, 3 tweets, 2 min read
This is related to a question @LesleyRStahl asked the CEO of Google's YouTube on @60Minutes last night: when a social media platform is actually recommending harmful content, does that change its liability? In this case, Instagram was reported to be recommending young girls to pedophiles.
There are many examples where microtargeting plus harmful content/context is toxic: illegal drug ads microtargeted to addicts, political disinfo microtargeted to the vulnerable. Put any of these algorithmic recommendations into the offline world and they wouldn't be tolerated.
Back to the first story: imagine a service showing up at a pedophile's front door with a photo album of young girls. Then imagine advertisers supporting that service. They very much are, to the tune of tens of billions of dollars. @DamianCollins