Now: keynote at #BYSTANDER22 by @DG_Rand on the problem of misinformation and how polarization might actually help solve it.

The key question is: How do we fight misinformation at scale? 1/19
Currently, platforms rely on technical solutions such as machine learning. But these have limits, which often means that human fact-checkers are brought in. This *does* work: warning labels limit the spread of false news. 2/19
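To make the machine-learning approach concrete, here is a minimal sketch of the kind of automated pipeline platforms rely on: a supervised text classifier over labeled headlines. The tiny dataset, the labels, and the model choice are stand-ins for illustration, not any platform's actual system.

```python
# Minimal sketch of an automated misinformation classifier.
# The headlines, labels, and model are illustrative stand-ins only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

headlines = [
    "Scientists confirm new vaccine is safe in large trial",
    "Miracle cure the government doesn't want you to know about",
    "Central bank raises interest rates by a quarter point",
    "Secret lab admits the moon landing was staged",
]
labels = [0, 1, 0, 1]  # 0 = reliable, 1 = misinformation (toy labels)

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(headlines, labels)

# Likely flags the new headline as misinformation, since it shares
# vocabulary ("cure") with the misinformation examples.
print(clf.predict(["Shocking cure they are hiding from you"]))
```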
The problem with fact-checking is that it doesn't scale. How can we deal with misinformation at scale?

The solution is to turn towards the wisdom of the crowds (i.e., the finding that aggregations of average people's opinions are often very accurate). 3/19
Wisdom of the crowds often works.

But for polarized topics such as politics, one may worry about the wisdom of the crowds: people have partisan biases that may destroy the effect. 4/19
Does it? A 2019 study examined Dems' & Reps' trust in news sources. There are partisan asymmetries (e.g., Fox News), but both groups reject hyper-partisan & fake news sites. Average ratings track trustworthiness accurately: a correlation of .90 with fact-checkers' ratings. 5/19
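A small simulation can make the aggregation logic behind that correlation concrete. Everything below is invented for illustration (source qualities, noise levels, rater counts), not the study's data: averaging many noisy individual ratings washes the noise out, so the crowd mean tracks expert ratings closely.

```python
# Wisdom-of-crowds sketch: average many noisy lay ratings per news source,
# then correlate the crowd averages with (less noisy) expert ratings.
# All quantities are simulated, not the 2019 study's data.
import numpy as np

rng = np.random.default_rng(0)
n_sources, n_raters = 60, 1000

true_quality = rng.uniform(0, 1, n_sources)                             # latent trustworthiness
lay_ratings = true_quality + rng.normal(0, 0.8, (n_raters, n_sources))  # very noisy individuals
expert_ratings = true_quality + rng.normal(0, 0.1, n_sources)           # assumed low-noise experts

crowd_average = lay_ratings.mean(axis=0)  # individual noise averages out

r = np.corrcoef(crowd_average, expert_ratings)[0, 1]
print(f"crowd-vs-expert correlation: r = {r:.2f}")  # lands near the expert ceiling
```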
But if this is used in the wild, will people adjust their responses to promote their political agenda? A follow-up study informed people that the results would be shared with Facebook and used to tweak the algorithm. Overall, the results didn't change. 6/19
Why? Because most people don't care about politics.

Can this also be applied at the level of articles, not just sources? Here, it is key to choose ecologically valid articles (i.e., articles that were actually shared). 7/19
In a 2021 study, 207 headlines selected for fact-checking by Facebook were checked by 3 professionals. Do the crowds' assessments match theirs? The benchmark is the degree of agreement among the fact-checkers themselves. A crowd of 15 laypeople can indeed match it. 8/19
Does this work cross-culturally? A 2022 paper examines this across 16 countries. In almost all of them, 20 raters per headline provide 90% accuracy in detecting false news. 9/19
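Why does a crowd of 15-20 raters suffice? A toy majority-vote simulation illustrates the scaling; the 65% individual accuracy below is an assumed figure, not one from the papers:

```python
# Condorcet-style sketch: majority votes of k raters, each only modestly
# accurate alone, converge on the right answer as k grows.
import numpy as np

rng = np.random.default_rng(1)
n_headlines = 2000
p_correct_individual = 0.65  # assumed accuracy of a single lay rater

for k in (1, 5, 10, 20):
    correct_votes = rng.random((k, n_headlines)) < p_correct_individual
    majority_right = correct_votes.sum(axis=0) > k / 2  # ties count as errors
    print(f"crowd of {k:>2}: {majority_right.mean():.0%} of headlines classified correctly")
```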
Yet, for this to be usable at scale, people need to choose for themselves what to rate; there is too much information out there for a central selection process. That reintroduces the risk of partisan cheerleading. 10/19
What could motivate people to flag content? They could care about truth. But they could also care about partisanship. This may drive people to flag accurate but opposing content. 11/19
However, the values placed on truth and on partisanship are continua. In this two-dimensional space, most people care about both and, hence, are motivated specifically to flag false information from the opposing side. 12/19
Here, partisanship actually supplies the motivation to invest in fact-checking. And most people are not so politically motivated that they wrongly flag opposing information that is actually true. 13/19
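A toy model of that two-dimensional motivation space shows the mechanism. The utility form, weights, and flagging cost below are assumptions made for illustration, not Rand's actual model: because the truth motive and the partisan motive stack only for false posts from the other side, flags concentrate on false-opposing content.

```python
# Toy flagging model: each user weighs truth and partisanship, and flags a
# post when the combined motivation clears an effort threshold.
# All parameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)
n_users, n_posts = 500, 200

truth_weight = rng.uniform(0, 1, n_users)   # value placed on truth (continuum)
party_weight = rng.uniform(0, 1, n_users)   # value placed on one's side (continuum)
flag_cost = 0.8                             # motivation needed to bother flagging

post_is_false = rng.random(n_posts) < 0.5
post_opposes_user = rng.random((n_users, n_posts)) < 0.5

# Truth motive fires on false posts; partisan motive fires on opposing posts.
motivation = (truth_weight[:, None] * post_is_false
              + party_weight[:, None] * post_opposes_user)
flags = motivation > flag_cost

false_opposing = flags & post_is_false & post_opposes_user
true_opposing = flags & ~post_is_false & post_opposes_user
print(f"false-opposing share of flags: {false_opposing.sum() / flags.sum():.0%}")
print(f"true-opposing share of flags:  {true_opposing.sum() / flags.sum():.0%}")
```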
Twitter has implemented a crowdsourced fact-checking program with 3,000 participants, called Birdwatch. What drives the Birdwatchers' behavior? 14/19
73% of Birdwatchers never flagged anything, showing that flagging requires high motivation.

Analyses of flagged tweets and the Birdwatchers show that, of the flagging that does happen, 60% targets the false-opposing type. 15/19
Only 10% of flags target information that is true but opposing. 16/19
Despite this partisan bias, accuracy was high: 80% of flagged content was also flagged by a professional fact-checker.

Partisan motivations drive people to contribute helpful flagging. 17/19
Still, some potential problems: 1) We don't know what the flaggers are missing. 2) Problems may grow if the crowd is not evenly divided in terms of ideology. 3) Non-polarized false information may not be targeted much. 18/19
So, in conclusion: the wisdom of crowds can help platforms identify misinfo in scalable ways. And partisan motivation solves the public-goods problem of why people should bother to help society in this way. 19/19


More from @M_B_Petersen

Jun 9
At #BYSTANDER22 @jrpsau presents our research on how an intervention by @SSTSundhed during the pandemic decreased false news sharing by boosting people's competence in spotting "fake news". 1/5
One intervention often recommended is "accuracy nudges". These assume that people have an intrinsic motivation to be accurate but leave people on their own re: how to spot "fake news".

In risk communication, however, the recommendation is always to give *actionable* advice. 2/5
According to Protection Motivation Theory, actionable advice boosts the feelings of competence and efficacy that drive behavior. 3/5
Jun 9
At #BYSTANDER22 @zeaszebeni presents on the profiles of "fake news" believers in Hungary.

Many different factors shape people's beliefs in disinformation. But most research is variable-centered. Here, a *person-centered* approach is used. 1/7
A person-centered approach focuses on whether different types of disinfo speak to different people. This approach is here used in the polarized Hungarian context, where the term "fake news" is often used to delegitimize the other side. 2/7
295 participants were recruited. They rated the accuracy of news stories (true and false). Multiple factors related to trust were measured and then cluster analysis was applied. 3/7
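As a rough illustration of that last step, clustering respondents on their trust measures might look like the sketch below; the data, the four measures, and the three-cluster choice are placeholders, not the study's specification:

```python
# Person-centered sketch: cluster respondents on trust-related measures so
# that profiles of people, not single variables, become the unit of analysis.
# Data and settings are placeholders for the study's actual measures.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
X = rng.normal(size=(295, 4))  # stand-in: 295 respondents x 4 trust measures

X_std = StandardScaler().fit_transform(X)  # put measures on a common scale
profiles = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X_std)

for c in range(3):
    print(f"profile {c}: n = {np.sum(profiles == c)}")
```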
Jun 9
At #BYSTANDER22 @Sacha_Altay presents on how effective fact-checking, nudges & literacy interventions are against misinformation.

Many interventions are being tested & have been shown to be effective, but short-lived, in the lab. BUT these lab tests do not reflect our info ecosystems. 1/7
News consumption is low. Unreliable news may be 5% of people's news diet and even less of their overall media diet. People spend more time on porn (!) than on news. Political news consumption is smaller still. 2/7
People's false beliefs reflect not that they are misinformed but that they are uninformed. Enhancing engagement with reliable news is more important than fighting misinfo. 3/7
Jun 9
At #BYSTANDER22 @aqsa_farooq13 presents work on how young people react to peers who share misinfo.

Young people are massive users of social media, but their ability to detect misinfo is limited. Much misinfo research focuses on the content of misinfo. But what about the source? 1/7
Development involves multiple factors that can shape kids' reactions to misinfo. Social group membership influences young people's acceptance of information: info from ingroups is strongly preferred. Rather than seeking accuracy, children may prioritize loyalty. 2/7
Children use three domains of knowledge in reasoning (cf. Social Domain Theory): moral, social, and personal. Depending on developmental stage, children prioritize different domains. 3/7
Jun 9
At #BYSTANDER22 @StefSelmer presents our research on extreme misogyny, including violent extremism, and its potential relationship to individual differences in sociosexuality. 1/7
Extreme misogyny and violent extremism are often seen as the extreme face of (parts of) the incel community. Why might this link exist? 2/7
Potentially because of "sociosexual mismatches": violence emerges from frustrated sexual desires, and extreme misogyny reflects a "revenge strategy" against the women these men desire. 3/7
Jun 9
At #BYSTANDER22 @Linn_Sandberg presents on images of Muslims in online & legacy news media.

Is attention to Muslims greater online than in legacy media? Do representations differ? And are online representations more hostile (supporting the "online hostility thesis")? 1/5
The data are online and legacy media documents from 8 countries: Sweden, Norway, Denmark, UK, France, Netherlands, Germany & Spain. Word2vec models are used to analyze the texts and extract similarity scores between "muslim"/"islam" and neighboring terms. 2/5
In most countries (though not Denmark), there is more attention to muslims/islam online than in legacy media. In some countries (but not all), more negative words sit close to muslim/islam. Online, however, there is a stronger connection to e.g. "extremist" and "terrorist". 3/5
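For the curious, the similarity-extraction step could be sketched with gensim roughly as below; the two-sentence corpus and the term list are placeholders for the full country corpora used in the paper:

```python
# Word2vec sketch: train embeddings on a (here: toy) corpus and read off
# cosine similarities between "muslim" and candidate neighboring terms.
from gensim.models import Word2Vec

sentences = [  # placeholder; the real input is a tokenized country corpus
    ["muslim", "community", "mosque", "prayer"],
    ["islam", "extremist", "terrorist", "attack"],
]

model = Word2Vec(sentences, vector_size=100, window=5, min_count=1, workers=1)

for term in ("community", "extremist", "terrorist"):
    print(term, model.wv.similarity("muslim", term))  # noisy on a toy corpus
```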
