Barbora Bukovska @article19org says the need for regulation is definitely there, but the Bill does not strike the right balance. Beyond content moderation, the dominant business model of these companies needs to be addressed.
@RichardAWilson7 @UConn describes how a US judgment has found that Section 230 of the Communications Decency Act protects content on platforms, but not content algorithms - so there is space for regulation here
@SilkieCarlo @BigBrotherWatch says the problem lies in microtargeted advertising; reforms should focus on that and on how it changes what people see online
@MrJohnNicolson asks @RichardAWilson7 whether, as a result of the #OnlineSafetyBill, there are overzealous take-downs of LGBT+ content. Professor Wilson agrees this should be under constant review.
@SilkieCarlo tells @MrJohnNicolson that both LGBT+ users and gender-critical feminists are being censored online in an erratic way, usually in response to press criticism and public coverage
Lord Black of Brentwood asks whether there is a risk of double-regulation for press content carried on social media platforms.
@RichardAWilson7 says no, the #OnlineSafetyBill largely exempts mainstream news publications. His concern is outlets linked to foreign governments
Lord Black asks the witnesses what they think about the powers of the @DCMS Secretary of State.
@RichardAWilson7 @UConnLaw says he thinks they are particularly concerning for freedom of speech, and for @Ofcom's independence - the regulator should have more independence
@SilkieCarlo tells @Debbie_Abrahams she supports opening up algorithms to transparency at a systems level, but that, in the context of the #OnlineSafetyBill, defining what types of content shouldn't be distributed is dangerous
@MatthewdAncona tells Baroness Kidron that there is no immediate precedent in human history for a 'Twitter pile-on'. That may mean 'harmful content' needs to be redefined to account for virality, but the #OnlineSafetyBill doesn't sufficiently do that.
We are now joined by our next panel of witnesses to discuss the impact of the #OnlineSafetyBill on freedom of the press
Gavin Millar @MatrixChambers thinks the #OnlineSafetyBill should not include illegal content, as platforms won't be able to deal with it.
He says the Bill should clearly define what legal but harmful content it seeks to address, with a clear process of redress for those harmed
@MattRogerson @Guardian tells @Dean4Watford that we need to understand how platforms' content algorithms work, both to understand harms and to address commercial and competition issues
In response to @LordJimKnight asking about the protection of democratic debate in the #OnlineSafetyBill, Peter Wright @DMGTplc says that, in light of the death of Sir David Amess, candidates receiving abuse could be given additional protections online
We are now joined by two last witnesses today to discuss the psychological impact of being online:
@JonHaidt tells Baroness Kidron that the basic premise of Facebook is harmful to young girls, as revealed in the @WSJ #FacebookFiles. We should treat them like regular businesses and remove liability protections until they have professional standards wsj.com/articles/faceb…
"Our motto at @CommonSense is 'sanity not censorship'. It's insanity to leave these decisions up to the like of Mark Zuckerberg" @JimSteyer tells Baroness Kidron
"Facebook's motto was 'Move Fast and Break Things.' They did, the broke our kids and they're breaking our democracy," Professor @JonHaidt@NYUStern tells @Dean4Watford, "they are the British East India Company"
You should invest in digital literacy and digital citizenship from kindergarten onwards, says @JimSteyer. @JonHaidt adds that to break the #SocialDilemma, we need to make companies liable for children being on their platforms @CommonSense @NYUStern
That concludes our evidence session today - thank you for joining us
⏯️ 'The system they've built is one that is perfect for the dissemination of misinformation', @Imi_Ahmed @CCDHate tells Chair @DamianCollins
⏯️ The 'Disinformation Dozen' super-spreaders still have 7.9 million followers according to @CCDHate research, @Imi_Ahmed tells @DarrenPJones
In response to Baroness Kidron, @Imi_Ahmed says platforms have been told - by civil society, by governments, by their own employees - about online harms, yet have done little about them, citing @sheeraf and @ceciliakang's new book, 'An Ugly Truth' 📘
.@Imi_Ahmed tells the #OnlineSafetyBill Committee that @CCDHate have identified 'the Disinformation Dozen', responsible for almost 2/3rds of anti-vaccine content circulating on social media platforms. He says he is concerned that disinformation is not named as a harm in the Bill
.@Imi_Ahmed says that if platforms were transparent about how they enforce their own rules, about their algorithms, and their business models, solutions to online harms would become clear. Independent inspection is needed
@MrJohnNicolson asks: What are the motivations of anti-vaxxers? @Imi_Ahmed says this is like any other conspiracy theory: making people distrust authorities. @CCDHate research says anti-vaxx networks reach 60 million online users