WATCH ⏯️ Highlights from last week's evidence session with researchers on the #OnlineSafetyBill 🧵
Witness @LauraEdelson2 @nyutandon explains to the Committee how 'edge effects' and user interactions shape content algorithms 📱
1/7
⏯️ Witness @LauraEdelson2 tells Baroness Kidron that without seeing the data behind it, research from social media platforms ‘is not science’ 🔎
2/7
⏯️ Chair @DamianCollins: "the more vulnerable you are, the more likely you are to see harmful content [...] Would that be a correct assessment?"
Witness @LauraEdelson2 @nyutandon: "Frankly yes, this gets to the heart of interest-based content promotion"
3/7
⏯️ Witness @gchaslot @aiTransparency: "Studying the @YouTube algorithm, it doesn't try to find the perfect content for you, it tries to find the rabbit-hole that's the closest to your interests" 🐇🕳️
4/7
⏯️ @Q66Suzi asks "How long will it take to take content down?"
Witness @gchaslot: "It doesn't matter how long it stays on the platform, what matters is how much the algorithm is amplifying"
5/7
⏯️ In response to Lord Gilbert of Panteg @Stephen17262836, witness Renée DiResta @noUpside @stanfordio explains why 'reduce' is a better tool than 'remove' for moderation of potentially harmful content.
6/7
⏯️ Witness @LauraEdelson2 @nyutandon: "I would strongly encourage you to include advertising in what is under oversight. There just isn't an easy distinction to be made" 💸
7/7
⏯️ 'The system they've built is one that is perfect for the dissemination of misinformation', @Imi_Ahmed @CCDHate tells Chair @DamianCollins
⏯️ The 'Disinformation Dozen' super-spreaders still have 7.9 million followers according to @CCDHate research, @Imi_Ahmed tells @DarrenPJones
In response to Baroness Kidron, @Imi_Ahmed says platforms have been told - by civil society, by governments, by their own employees - about online harms, yet have done little about them, citing @sheeraf and @ceciliakang's new book, 'An Ugly Truth' 📘
.@Imi_Ahmed tells the #OnlineSafetyBill Committee that @CCDHate have identified 'the Disinformation Dozen', responsible for almost two-thirds of anti-vaccine content circulating on social media platforms. He says he is concerned that disinformation is not named as a harm in the Bill
.@Imi_Ahmed says that if platforms were transparent about how they enforce their own rules, about their algorithms, and about their business models, solutions to online harms would become clear. Independent inspection is needed
@MrJohnNicolson asks: What are the motivations of anti-vaxxers? @Imi_Ahmed says this is like any other conspiracy theory: making people distrust authorities. @CCDHate research says anti-vaxx networks reach 60 million online users