🚨Working paper alert!🚨
"Scaling up fact-checking using the wisdom of crowds"

We find that crowds of 10 laypeople rating just headlines match the performance of professional fact-checkers who researched the full articles, using a set of URLs flagged by an internal FB algorithm

psyarxiv.com/9qdza/
Fact-checking could help fight misinformation online:

➤ Platforms can downrank flagged content so that fewer users see it

➤ Corrections can reduce false beliefs (and forget backfire effects: e.g. link.springer.com/article/10.100… by @thomasjwood @EthanVPorter)

🚨But there is a BIG problem!🚨
Professional fact-checking doesn't SCALE

e.g., last January, FB's US partners fact-checked just 200 articles/month!
thehill.com/policy/technol…
Even if ML expands fact-checking's reach, there's a desperate need for scalability

Fact-checkers are also perceived as having a liberal bias, which creates political issues
Here's where the *wisdom of crowds* comes in

Crowd judgments have been shown to perform well in guessing tasks, medical diagnoses, and market predictions

Plus, politically balanced crowds can't be accused of bias

BUT can crowds actually do a good job of evaluating news articles?
We set out to answer this question

It was critical to use *representative* articles; otherwise it would be unclear whether our findings would generalize

So we partnered with FB's Community Review team and got 207 URLs flagged for fact-checking by an internal FB algorithm

axios.com/facebook-fact-…
Next, we had 3 professional fact-checkers research each article & rate its accuracy

First surprise: They disagreed more than you might expect!

The average pairwise correlation between the fact-checkers' ratings was .62

On half the articles, one fact-checker disagreed with the other two; on the other half, all three agreed
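
To make that agreement number concrete, here's a minimal sketch (Python, not the paper's actual code) of an average-pairwise-correlation calculation. The rating matrix below is random placeholder data and the 1-7 scale is my assumption, so it won't reproduce the .62:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)

# Hypothetical stand-in: ratings[i, j] = fact-checker j's accuracy rating
# of article i on an assumed continuous 1-7 scale (random placeholder data).
ratings = rng.uniform(1, 7, size=(207, 3))

# Average Pearson correlation over the three fact-checker pairs.
pairwise = [
    np.corrcoef(ratings[:, a], ratings[:, b])[0, 1]
    for a, b in combinations(range(3), 2)
]
print(f"avg pairwise correlation: {np.mean(pairwise):.2f}")  # paper reports .62 on real data
```
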
Then we recruited N = 1,128 laypeople from MTurk to rate the same articles (20 articles per turker)

For scalability, they just read & rated each headline + lede, not the full article

Half were shown the URL domain; the other half got no source info

Our Q: How well do layperson ratings predict fact-checker ratings?
We created politically balanced crowds & correlated their average ratings with the average fact-checker ratings

The crowd does quite well:

With as few as 10 laypeople, the crowd's rating is as correlated with the average fact-checker rating as the fact-checkers' ratings are correlated with each other!!
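
For readers who want the mechanics, here's a rough sketch of this kind of crowd-size analysis. Everything below is a hypothetical stand-in (pool sizes, 1-7 scale, random data), and it simplifies by giving every rater a rating for every article, whereas in the study each turker rated only 20:

```python
import numpy as np

rng = np.random.default_rng(1)
n_articles, n_dem, n_rep = 207, 60, 60  # hypothetical rater pool sizes

# Stand-in data: per-article ratings from Democratic and Republican raters,
# plus the mean fact-checker rating per article (all random placeholders).
dem = rng.uniform(1, 7, size=(n_articles, n_dem))
rep = rng.uniform(1, 7, size=(n_articles, n_rep))
fc_mean = rng.uniform(1, 7, size=n_articles)

def balanced_crowd_corr(k, n_draws=500):
    """Average correlation between a size-k politically balanced crowd's
    mean rating and the mean fact-checker rating, over random crowds."""
    corrs = []
    for _ in range(n_draws):
        d = rng.choice(n_dem, size=k // 2, replace=False)
        r = rng.choice(n_rep, size=k - k // 2, replace=False)
        crowd_mean = np.concatenate([dem[:, d], rep[:, r]], axis=1).mean(axis=1)
        corrs.append(np.corrcoef(crowd_mean, fc_mean)[0, 1])
    return float(np.mean(corrs))

for k in (2, 10, 20):  # example crowd sizes
    print(k, round(balanced_crowd_corr(k), 2))
```
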
Providing the article's publisher domain improved crowd performance (a bit)

This is consistent with the suggestion that source info only helps when there's a mismatch between headline plausibility & source: these headlines were mostly implausible, but some came from trusted sources

misinforeview.hks.harvard.edu/article/emphas…
Next, we used layperson ratings to predict the modal categorical rating the fact-checkers gave each headline (1 = True, 0 = Not True)

Overall AUC = .86
AUC > .90 for articles where the fact-checkers were unanimous
AUC > .75 for articles where one fact-checker disagreed with the other two

Pretty damn good!
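
Here's a minimal sketch of that classification analysis (again on made-up placeholder data, so the number it prints is meaningless): treat the crowd's mean rating as a score for predicting the modal fact-checker verdict and compute the AUC:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)

# Hypothetical stand-in data for 207 articles.
modal_fc_verdict = rng.integers(0, 2, size=207)  # 1 = True, 0 = Not True
crowd_mean_rating = rng.uniform(1, 7, size=207)  # crowd's mean accuracy rating

# AUC: probability that a randomly chosen True article gets a higher crowd
# score than a randomly chosen Not-True one (paper reports .86 overall).
print(f"AUC: {roc_auc_score(modal_fc_verdict, crowd_mean_rating):.2f}")

# The unanimous vs. split breakdown is the same computation restricted
# to the corresponding subsets of articles.
```
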
Finally, we asked whether some crowds did better than others

Answer: yes & no

Crowds that were 1) Democratic, 2) high in cognitive reflection (CRT), or 3) high in political knowledge did better than their 1) Republican, 2) low-CRT, or 3) low-political-knowledge counterparts, but DIDN'T outperform the overall crowd!

The crowd needn't be all experts to match expert judgment
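
The subgroup comparison is the same crowd-averaging logic applied to restricted rater pools; a toy sketch (hypothetical pool sizes, random placeholder data again):

```python
import numpy as np

rng = np.random.default_rng(3)
n_articles = 207

# Stand-in ratings for two subgroups, plus the mean fact-checker rating.
high_crt = rng.uniform(1, 7, size=(n_articles, 30))
low_crt = rng.uniform(1, 7, size=(n_articles, 30))
fc_mean = rng.uniform(1, 7, size=n_articles)

def crowd_corr(pool, k=10, n_draws=500):
    """Average correlation of a size-k crowd drawn from `pool` with fc_mean."""
    corrs = []
    for _ in range(n_draws):
        idx = rng.choice(pool.shape[1], size=k, replace=False)
        corrs.append(np.corrcoef(pool[:, idx].mean(axis=1), fc_mean)[0, 1])
    return float(np.mean(corrs))

print("high CRT:", round(crowd_corr(high_crt), 2))
print("low CRT: ", round(crowd_corr(low_crt), 2))
```
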
Caveats:
1) Individuals still fell for misinfo, but *crowds* did well
2) Need to protect against coordinated attacks (e.g., randomly poll users rather than allowing reddit-style self-selection; see the sketch after this list)
3) Not a representative sample, but the point is that some laypeople can do well (FB could hire turkers!)
4) This was pre-COVID
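
On caveat 2, here's a toy illustration (not any real platform API) of platform-side random assignment: because the platform, not the user, decides who rates which flagged URL, attackers can't self-select onto the items they want to manipulate:

```python
import random

# Hypothetical pool of eligible raters on the platform.
user_pool = [f"user_{i}" for i in range(100_000)]

def assign_raters(url: str, k: int = 10, seed: int | None = None) -> list[str]:
    """Randomly pick k raters for a flagged URL (platform-chosen, not opt-in)."""
    return random.Random(seed).sample(user_pool, k)

print(assign_raters("https://example.com/flagged-article", seed=42))
```
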
Overall, we think that crowdsourcing is a really promising avenue for platforms trying to scale their fact-checking programs!

Led by @_JenAllen & @AaArechar, w/ @GordPennycook

Thanks to the FB Community Review team and others who gave comments

Would love to hear your thoughts too!🎉
PS: There is a lot of concern about crowdsourcing being gameable (e.g., via coordinated attacks). Check out this paper by @_ziv_e, @GordPennycook, and myself that discusses how to prevent this: dl.acm.org/doi/abs/10.114…
