How to resolve content moderation dilemmas between free speech and harmful misinformation? Thrilled to share our new article just out in PNAS
With @JasonReifler @stefanmherzog @lorenz_spreen @STWorg @MLeiser and Ralph Hertwig @arc_mpib @mpib_berlin 1/11
pnas.org/doi/10.1073/pn…
Content moderation of online speech is a moral minefield, especially when 2 key values come into conflict: upholding freedom of expression and preventing harm caused by misinformation. In our study, we examined how the U.S. public would approach such difficult trade-offs. 2/11
In a conjoint survey experiment, U.S. respondents indicated whether they would remove problematic social media posts on 4 misinformation topics and whether they would take punitive action against the accounts. 3/11
Our hypothetical scenarios included misinformation topics where active policies have already been implemented by social media platforms: politics (“election denial”), health (“anti-vaccination”), history (“Holocaust denial”), and the environment (“climate change denial”). 4/11
The conjoint design allowed us to systematically vary factors that could influence moral judgments. These factors included: characteristics of the account and of the shared content; whether this was a repeated offense; and the consequences of sharing the misinformation. 5/11
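For readers unfamiliar with conjoint designs, here is a minimal sketch in Python of how such randomized vignettes are typically composed. The attribute names and levels below are purely illustrative and are not the study’s exact materials; only the general idea (each scenario is built by independently sampling one level per attribute) is what matters.

```python
import random

# Illustrative attributes and levels (hypothetical; the study's exact wording differs).
ATTRIBUTES = {
    "topic": ["election denial", "anti-vaccination", "Holocaust denial", "climate change denial"],
    "account_partisanship": ["Democrat", "Republican", "unknown"],
    "followers": ["few", "many"],
    "offense_history": ["first offense", "repeated offense"],
    "consequences": ["no harm reported", "minor harm", "severe harm"],
}

def sample_vignette(rng: random.Random) -> dict:
    """Compose one scenario by independently sampling a level for each attribute."""
    return {attr: rng.choice(levels) for attr, levels in ATTRIBUTES.items()}

if __name__ == "__main__":
    rng = random.Random(42)
    # Each respondent rates several randomly composed scenarios; because levels are
    # randomized independently, the average effect of each attribute on decisions
    # (e.g., remove the post, suspend the account) can be estimated.
    for _ in range(3):
        print(sample_vignette(rng))
```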
The majority of respondents chose to take some action to prevent the spread of misinformation. On average, 66% said they would remove the offending posts, and 78% would take some action against the account (of which 33% opted to “issue a warning”). 6/11
Content moderation of harmful misinformation was a partisan issue: Across all four scenarios, Republicans were consistently less willing than Democrats or independents to remove posts or penalize the accounts that posted them. 7/11
What were the key factors that affected people’s decisions to quash harmful misinformation? The topic of the misinformation, the severity of its consequences, and whether it was a repeated offense had the strongest impact on decisions to remove posts and suspend accounts. 8/11
Characteristics of the account itself—the person behind the account, their partisanship, and number of followers—had little to no effect on respondents’ decisions. Respondents did not penalize political out-group accounts more than in-group accounts. 9/11
Our results also show that, despite partisan differences, there is room for agreement in approaching these difficult dilemmas. We hope that our study can contribute to the process of establishing transparent and consistent rules for content moderation. 10/11
The paper is published with #openaccess. Many thanks to my wonderful co-authors, the insightful reviewers and editor, our scientific editor Deb Ain, RA Spela Vrtovec, and our PR team🙏. For my earlier thread on this study, see here. 11/11 End🧵