David G. Rand
Mar 17, 2021 · 25 tweets · 23 min read
🚨Out now in Nature!🚨
A fundamentally new way of fighting misinfo online:

Surveys + a field experiment with >5k Twitter users show that gently nudging users to think about accuracy increases the quality of news they share, because most users don't share misinfo on purpose.
nature.com/articles/s4158…

1/
Why do people share misinfo? Are they just confused and can't tell what's true?

Probably not!

When asked about the accuracy of news posts, subjects rated true posts much higher than false ones. But when asked whether they'd *share* them online, veracity had little impact; sharing was instead driven mostly by politics.
So why this disconnect between accuracy judgments and sharing intentions? Is it that we are in a "post-truth world" and people no longer *care* much about accuracy?

Probably not!

Participants overwhelmingly say that accuracy is very important when deciding what to share.
We argue the answer is *inattention*: accuracy motives are often overshadowed because social media focuses attention on other factors, e.g. the desire to attract and please followers.

This lines up with the past finding that more intuitive Twitter users share lower-quality news.
We test these competing views by shifting attention towards accuracy in 4 experiments (total N=3,485) with MTurkers and a roughly representative sample. If people don't care much about accuracy, this should have no effect. But if the problem is inattention, it should make sharing more discerning.
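(Here "discerning" means, roughly, sharing true content more than false content: discernment = P(share | true) - P(share | false), so an intervention improves sharing quality when it widens that gap.)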
In one experiment, Treatment participants rate the accuracy of every news post before indicating how likely they'd be to share it. In the Control condition they just indicate sharing intentions.

The Treatment reduces sharing of false news by 50%! Most of the remaining sharing of false news is explained by confusion.
How about a lightweight prompt?

Treatment: subjects rate the accuracy of a single nonpolitical headline at the start of the study, subtly priming the concept of accuracy.

This significantly increases the quality of subsequent sharing intentions (reducing sharing of false but not true news) relative to control.
Next, we test our intervention "in the wild" on Twitter. We build up a follower base of users who retweet Breitbart or Infowars. We then send N=5,379 users a DM asking them to judge the accuracy of a nonpolitical headline (with the DM date randomly assigned to allow causal inference).
We quantify the quality of news tweeted using fact-checker trust ratings of 60 news sites (pnas.org/content/116/7/…). At baseline, our users share links to quite low-quality sites.

We assess the intervention by comparing links shared in the 24 hours after receiving the DM to links from users not yet DMed.
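To make the outcome concrete, here's a minimal sketch in Python (made-up domain ratings, column names, and data; not the paper's actual pipeline) of scoring each retweeted link by the fact-checker trust rating of its source domain and comparing treated vs. not-yet-treated users:

import pandas as pd

# hypothetical fact-checker trust ratings (0-1 scale) for news domains
domain_quality = {"nytimes.com": 0.9, "dailycaller.com": 0.35, "breitbart.com": 0.3}

rts = pd.DataFrame({
    "user": ["a", "a", "b", "b", "c"],
    "domain": ["breitbart.com", "nytimes.com", "dailycaller.com", "breitbart.com", "nytimes.com"],
    "treated": [True, True, False, False, False],  # link shared within 24h of the DM?
})
rts["quality"] = rts["domain"].map(domain_quality)

# average quality of shared links, by treatment status
print(rts.groupby("treated")["quality"].mean())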
We find an increase in the quality of news retweeted after receiving the accuracy-prompt DM: a 4.8% increase in average quality, a 9.0% increase in summed quality, and a 3x increase in discernment. Fraction of RTs to DailyCaller/Breitbart 🡳, to NYTimes 🡱.

The effect is significant in >80% of 192 model specifications.
Agent-based simulations show how this positive impact can be amplified by network effects. If I don't retweet a post, my followers don't see it and won't retweet it, so none of their followers see it, and so on. Plus, the effect sizes observed in our experiment could certainly be increased through optimization.
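To see why the network effect matters, here's a toy cascade simulation (my own construction, not the paper's agent-based model; the graph and retweet probabilities are arbitrary, with the prompt assumed to halve the chance of retweeting a false post, echoing the ~50% survey result):

import random
import networkx as nx

random.seed(0)
G = nx.gnp_random_graph(2000, 0.005, seed=0, directed=True)  # toy follower graph

def cascade_size(p_retweet, seeds=5):
    """Seed a false post with a few users and let it spread via retweets."""
    exposed = set(random.sample(list(G.nodes), seeds))
    frontier = list(exposed)
    while frontier:
        new = []
        for u in frontier:
            if random.random() < p_retweet:      # u retweets the post...
                for v in G.successors(u):         # ...so u's followers now see it
                    if v not in exposed:
                        exposed.add(v)
                        new.append(v)
        frontier = new
    return len(exposed)

def avg_reach(p_retweet, runs=200):
    return sum(cascade_size(p_retweet) for _ in range(runs)) / runs

print("avg reach without prompt:", avg_reach(0.15))   # arbitrary baseline retweet prob
print("avg reach with prompt:   ", avg_reach(0.075))  # hypothetically halved by the nudge

A small per-user reduction compounds: fewer retweets mean fewer exposures, which mean fewer downstream retweets.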
We also formalize our inattention account using utility theory. Due to attention constraints, agents can only attend to a subset of terms in their utility function. So even if you have a strong preference for accuracy, accuracy won't impact your sharing choice when attention is directed elsewhere!
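One minimal way to write this kind of limited-attention model down (my own notation, not necessarily the paper's exact formulation): the utility of sharing post $j$ is

\[
  U_j = \sum_{k} a_k \,\beta_k\, x_{jk}, \qquad a_k \in \{0,1\}, \quad \sum_k a_k \le K,
\]

where the $x_{jk}$ are attributes of the post (perceived accuracy, partisan alignment, how much it will please followers), the $\beta_k$ are the user's preference weights, and $a_k$ marks which attributes the user attends to under capacity $K$. Even with a large accuracy weight, accuracy drops out of the sharing decision whenever its $a_k = 0$; an accuracy prompt works by switching that $a_k$ to 1.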
Mechanism?

Fitting the model to the experimental data shows that the average participant cares about accuracy as much as or more than partisanship (confirming the survey results), but attention is often directed away from accuracy.

Plus, the treatment specifically reduces sharing of the more implausible news.
These studies help us see past the illusion that everyday citizens on the other side must be either stupid or evil; instead, we are often simply distracted from accuracy when online. Another implication of our results is that widely retweeted claims are not necessarily widely BELIEVED.
Our treatment could be easily implemented by platforms, e.g. by periodically asking users to rate the accuracy of random posts. This primes accuracy (and generates useful crowd ratings to identify misinformation; see the toy sketch below).

It's scalable and doesn't make platforms arbiters of truth!
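As a rough illustration of that crowd-ratings side effect (hypothetical post IDs, rating scale, and threshold; not any platform's actual system):

from statistics import mean

# accuracy ratings (1-4) collected from randomly prompted users, per post
ratings = {
    "post_1": [1, 2, 1, 1, 2],
    "post_2": [4, 3, 4, 4, 3],
}
FLAG_THRESHOLD = 2.0  # arbitrary cutoff for sending a post to review
for post_id, rs in ratings.items():
    score = mean(rs)
    print(post_id, round(score, 2), "flag for review" if score < FLAG_THRESHOLD else "ok")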
Here we focused on political news, but in follow-up studies we showed that the results generalize to COVID-19 misinformation as well (e.g. in this paper that we frantically pulled together in the first few days of the pandemic).
We hope that tech companies will investigate how they can leverage accuracy prompts to improve the quality of the news people share online

To that end, we're really excited about an ongoing collaboration we have with researchers at @Google's @Jigsaw: see psyarxiv.com/sjfbn
We were also really excited to see @tiktok_us, in collaboration with @IrrationalLabs, develop, assess, and implement an intervention based in part on our accuracy-prompt work.

Hoping that @jack @Twitter @Facebook and others will be similarly interested

This study is the latest in our research group's efforts to understand why people believe and share misinformation, and what can be done to combat it. For a full list of our papers, with links to PDFs and tweet threads, see docs.google.com/document/d/1k2…
Finally, if you made it this far into the thread and want to know how this work connects to broader psychological and cognitive science theory, check out this recent review "The Psychology of Fake News" that @GordPennycook and I published in @TrendsCognSci authors.elsevier.com/sd/article/S13…
I'm extremely excited about this project, which took years and was led by @GordPennycook @_ziv_e @MohsenMosleh, with invaluable input from coauthors @AaArechar @deaneckles.

Please let us know your comments, critiques, suggestions etc. Thanks!!

Ungated PDF: psyarxiv.com/3n9u8
I also wanted to share this @sciam piece that @GordPennycook and I wrote summarizing the paper and related work: scientificamerican.com/article/most-p…

More from @DG_Rand

Apr 14
🚨New WP🚨
Many people, from Trump to @elonmusk, have accused Twitter of anti-conservative bias.

Is this accusation accurate?

We test for evidence of such a bias empirically, and it turns out it's more complicated than you might think...

psyarxiv.com/ay9q5
1/
The root of the challenge when inferring political bias is that Republicans/conservatives are substantially more likely to share misinformation/fake news, as shown e.g. by @andyguess @j_a_tucker @grinbergnir @davidlazer et al science.org/doi/10.1126/sc… science.org/doi/abs/10.112…
And as we show in a large national survey, there is bipartisan support for platforms taking action to reduce misinformation; this is true both for misinfo in general and for a specific instance of misinfo (QAnon conspiracy theories).
Feb 16
🚨WP: Examining the psychology of misinformation around the globe🚨
Across 16 countries, N=34k
➤Strong regularities in cognitive, social & ideological predictors of misinfo belief
➤Broad intervention efficacy (accuracy prompts, literacy tips, crowdsourcing)
psyarxiv.com/a9frz
1/
A lot has been learned about the psychology of misinformation/fake news and what interventions may work; for an overview, see @GordPennycook's and my TICS review below:


BUT almost all of this work has focused on the West, and misinfo is a GLOBAL problem!
To explore the psychology of misinformation through a global lens, our large team (led by @AaArechar) recruited 34k social media users from 16 countries, matched to national distributions on age and gender. They rated 10 true and 10 false COVID headlines and were randomized to 4 conditions.
Sep 13, 2021
🚩Working paper🚩
DIGITAL LITERACY & SUSCEPTIBILITY TO FAKE NEWS

Lots of assumptions, but little data, out there on the link between digital literacy & fake news.

We find 2 different digital literacy measures predict the ability to tell true from false, but NOT sharing intent: psyarxiv.com/7rb2m
1/
Lack of digital literacy is a favorite explanation, in both the public and the academy, for the spread of fake news/misinformation. But there's surprisingly little data investigating this, and the results that do exist are mixed. One issue is that digital literacy is operationalized in various different ways.
We looked at 2 different measures
1) Self-reported familiarity/comfort with the internet (from @kmmunger @andyguess building off @eszter osf.io/3ncmk/)
2) Correct answer to a question about how Facebook chooses what news to show, from @rasmus_kleis @risj_oxford's 2018 Digital News Report
Sep 1, 2021
🚨Out in @ScienceAdvances🚨
SCALING UP FACT-CHECKING USING THE WISDOM OF CROWDS

How can platforms identify misinfo at scale? We find small groups of laypeople can match professional fact-checkers when evaluating URLs flagged for checking by Facebook!

science.org/doi/10.1126/sc…
1/
Fact-checking could reduce misinformation
➤ Platforms can downrank flagged content so fewer users see it
➤ Warnings reduce belief and sharing

⚠️But it doesn't SCALE⚠️
Fact-checkers can't keep up with the vast quantity of content posted every day.

(Fact-checkers are also accused of liberal bias.)
Here's where the *wisdom of crowds* comes in

Crowd judgments have been shown to perform well in guessing tasks, medical diagnoses, & market predictions

Plus, politically balanced crowds can't be accused of bias.

BUT can crowds actually do a good job of evaluating news articles?
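For intuition, here's a toy version of that comparison (invented numbers, not the paper's data): average a small, politically balanced crowd's ratings per article and correlate them with professional fact-checker ratings.

import numpy as np

factchecker = np.array([1.0, 0.2, 0.9, 0.1, 0.6])  # hypothetical per-article expert ratings
crowd = np.array([                                   # hypothetical layperson ratings per article
    [0.9, 0.8, 1.0, 0.7],
    [0.3, 0.1, 0.2, 0.4],
    [0.8, 0.9, 0.7, 1.0],
    [0.2, 0.1, 0.3, 0.1],
    [0.6, 0.5, 0.7, 0.6],
])
crowd_mean = crowd.mean(axis=1)  # aggregate the crowd per article
print("crowd-expert correlation:", np.corrcoef(crowd_mean, factchecker)[0, 1])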
Nov 1, 2020
New WP for your doomscroll:

➤We follow 842 Twitter users with a Dem or Rep bot
➤We find a large causal effect of shared partisanship on tie formation: users are ~3x more likely to follow back a co-partisan

psyarxiv.com/ykh5t/

Led by @_mohsen_m with @Cameron_Martel_ @deaneckles

1/
We are more likely to be friends with co-partisans offline & online

But this doesn't show a *causal* effect of shared partisanship on tie formation:
* Party is correlated with many factors that influence tie formation
* It could just be preferential exposure (e.g. via the friend recommendation algorithm)
So we test the causal effect using a Twitter field experiment.

Created bot accounts that strongly or weakly identified as Dem or Rep supporters

We randomly assigned 842 users to be followed by one of our accounts, and examined the probability that they reciprocated and followed our account back.

3/
Oct 8, 2020
🚨Working paper alert!🚨
"Scaling up fact-checking using the wisdom of crowds"

We find that 10 laypeople rating just headlines match the performance of professional fact-checkers researching full articles, using a set of URLs flagged by an internal Facebook algorithm.

psyarxiv.com/9qdza/
Fact-checking could help fight misinformation online:

➤ Platforms can downrank flagged content so that fewer users see it

➤ Corrections can reduce false beliefs (forget backfire effects: e.g. link.springer.com/article/10.100… by @thomasjwood @EthanVPorter)

🚨But there is a BIG problem!🚨
Professional fact-checking doesn't SCALE.

E.g. last January, Facebook's US partners fact-checked just 200 articles/month!
thehill.com/policy/technol…
Even if machine learning expands fact-check reach, there's a desperate need for scalability.

Fact-checkers are also perceived as having a liberal bias, which creates political issues.
