David G. Rand @dgrand.bsky.social
Prof @MIT - I've left X, you can find me on Bluesky at @dgrand.bsky.social
Oct 2 11 tweets 5 min read
🚨Out in Nature!🚨
Many (e.g., Trump, Jim Jordan, @elonmusk) have accused social media of anti-conservative bias
Is this accurate?
We test this empirically - and it's more complicated than you might think: conservatives ARE suspended more, but they also share more misinfo nature.com/articles/s4158…
After 2016, tech cos were under intense pressure to combat misinfo. Now, that pressure has shifted to avoiding claims of political bias, mostly from the right. This is stifling action against misinfo (and unfairly punishing misinfo researchers like Renée DiResta and @katestarbird)
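A minimal simulation sketch of the core logic (the sharing rates, thresholds, and counts below are hypothetical, not the paper's data): a suspension rule that is completely blind to politics will still suspend more of whichever group shares more misinfo.

```python
import random

random.seed(0)

# Hypothetical per-group rates of sharing links from low-quality domains
# (illustrative numbers only, not estimates from the paper).
MISINFO_RATE = {"conservative": 0.30, "liberal": 0.10}
N_USERS, N_SHARES, THRESHOLD = 10_000, 50, 10  # suspend if >10 bad shares

def simulate(group):
    """Fraction of users in `group` suspended by a politics-blind rule."""
    suspended = 0
    for _ in range(N_USERS):
        bad_shares = sum(random.random() < MISINFO_RATE[group]
                         for _ in range(N_SHARES))
        if bad_shares > THRESHOLD:  # the rule never looks at politics
            suspended += 1
    return suspended / N_USERS

for group in MISINFO_RATE:
    print(f"{group}: {simulate(group):.1%} suspended")
# In this toy setup the neutral rule suspends conservatives far more often,
# simply because they share more misinfo.
```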
Sep 12 16 tweets 6 min read
🚨Out in Science!🚨
Conspiracy beliefs famously resist correction, ya?
WRONG: We show brief convos w GPT4 reduce conspiracy beliefs by ~20%!
-Lasts over 2mo
-Works on entrenched beliefs
-Tailored AI response rebuts specific evidence offered by believers

1/ science.org/doi/10.1126/sc…

Attempts to debunk conspiracies are often futile, leading many to conclude that psychological needs/motivations blind people & make them resistant to evidence. But maybe past attempts just didn't deliver sufficiently specific/compelling evidence+arguments?
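A rough sketch of the tailored-rebuttal setup (elicit the belief and the believer's own evidence, then have the model rebut that specific evidence). The prompt wording, model name, and turn count below are assumptions for illustration, not the study's materials; it uses the OpenAI Python client.

```python
# Sketch of a brief tailored-rebuttal conversation (prompts/model assumed).
from openai import OpenAI

client = OpenAI()  # requires OPENAI_API_KEY in the environment

belief = input("Which conspiracy do you believe, and how strongly (0-100)? ")
evidence = input("What evidence convinces you it's true? ")

messages = [
    {"role": "system",
     "content": "You are a respectful interlocutor. Using accurate, "
                "verifiable facts, rebut the specific evidence the person "
                "offers for their conspiracy belief."},
    {"role": "user",
     "content": f"My belief: {belief}\nMy evidence: {evidence}"},
]

for _ in range(3):  # a short multi-turn back-and-forth
    reply = client.chat.completions.create(model="gpt-4", messages=messages)
    answer = reply.choices[0].message.content
    print("\nAI:", answer)
    messages.append({"role": "assistant", "content": answer})
    messages.append({"role": "user", "content": input("\nYour response: ")})
```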
Apr 3 12 tweets 5 min read
🚨WP🚨
Conspiracy beliefs famously resist correction, right?
WRONG: We show brief convos w GPT4 reduce conspiracy beliefs by ~20pp (d~1)!
🡆Tailored AI evidence rebuts specific arguments offered by believers
🡆Effect lasts 2+mo
🡆Works on entrenched beliefs
osf.io/preprints/psya…

Attempts to debunk conspiracies are often futile, leading many to conclude that conspiracy beliefs are driven by needs/motivations & thus resistant to evidence. But maybe past attempts just didn't deliver sufficiently specific/compelling evidence+arguments?
Nov 17, 2023 12 tweets 4 min read
🚨New WP🚨

Intuition favors belief in false claims. But why?

@ROrchinik argues it's the result of rationally adaptive intuitions adjusting to the low base rate of false claims in the US media environment

Check out Reed's talk at SJDM on 11/18 9:10am!

PDF: osf.io/preprints/psya…
Belief in misinfo has been linked to components of the digital media environment that increase reliance on intuitions. But why do our intuitions support belief in falsehoods?

Most theories of belief (e.g., @DanTGilbert's) suggest that information must be accepted before it can be deliberated on
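A toy numerical sketch of the base-rate argument (the numbers are illustrative, not Reed's estimates): if the vast majority of claims a person encounters are true, an intuition that defaults to belief is right almost all of the time.

```python
# Toy illustration of the base-rate argument (made-up base rate):
# in a low-falsehood media environment, "believe by default" has a
# very low error rate.
base_rate_true = 0.98  # assumed share of encountered claims that are true

accuracy_if_believe_all = base_rate_true
accuracy_if_doubt_all = 1 - base_rate_true

print(f"Believe everything: right {accuracy_if_believe_all:.0%} of the time")
print(f"Doubt everything:   right {accuracy_if_doubt_all:.0%} of the time")
# Credulous intuitions are "rationally adaptive" here - they only become
# costly when the share of false claims rises.
```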
Aug 21, 2023 16 tweets 5 min read
🚨Out in PoPS🚨
Can crowds help identify misinfo at scale? In this review we show that seemingly contradictory findings in the lit are simply due to different analytic approaches. In all datasets, the crowd is highly correlated w/ experts!
Crowd ratings=useful signals
journals.sagepub.com/eprint/NMKPE6F…
Professional fact-checking can help reduce misinfo:
➤Warning labels reduce belief & sharing
➤Platforms can downrank flagged content, reducing views

But - the volume of content posted is almost unlimited & exceeds the capacity of professional FCers. How to ID misinfo at SCALE?
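A sketch of the analytic point using simulated data (not the review's datasets): any single layperson correlates only modestly with experts, but the average of many laypeople correlates very highly, so rater-level and crowd-level analyses can look contradictory.

```python
# Simulated illustration of individual-rater vs crowd-average correlations.
import numpy as np

rng = np.random.default_rng(0)
n_items, n_raters = 200, 25

expert = rng.normal(size=n_items)  # expert quality score per item
lay = expert + rng.normal(scale=2.0, size=(n_raters, n_items))  # noisy lay ratings

single_r = np.corrcoef(lay[0], expert)[0, 1]           # one layperson vs experts
crowd_r = np.corrcoef(lay.mean(axis=0), expert)[0, 1]  # crowd average vs experts

print(f"single layperson vs experts: r = {single_r:.2f}")
print(f"crowd of {n_raters} vs experts:  r = {crowd_r:.2f}")
```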
Jun 29, 2023 20 tweets 7 min read
🚨Out in @NatureHumBehav🚨
We examine psychology of misinformation across
16 countries, N=34k
➤Consistent cognitive, social & ideological predictors of misinfo belief
➤Interventions (accuracy prompts, digital literacy tips, crowdsourcing) all broadly effective

1/ nature.com/articles/s4156…

A lot has been learned about the psychology of misinformation/fake news, and what interventions may work. For an overview, see @GordPennycook's and my TICS review:

BUT almost all of this work has focused on the West - and misinfo is a GLOBAL problem!
Dec 19, 2022 13 tweets 7 min read
🚨Out in @NatureComms🚨
New measure of Twitter users' exposure to misinfo from *ELITES* using @politifact ratings of the elites a user follows
➤Predicts users' misinfo sharing
➤More extreme Reps = more exposure
Check out your own exposure w/ web app! misinfoexpose.com
1/
Here's an (open access) link to the academic paper: nature.com/articles/s4146…
In this thread I'll unpack what we did and what we found
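A sketch of the exposure measure's basic logic (the paper's exact scoring of PolitiFact verdicts and any weighting are not reproduced here; the 0-1 falsity scale and elite names below are assumptions for illustration).

```python
# Hypothetical elite falsity scores, e.g. PolitiFact verdicts for each
# elite's statements mapped onto 0 (true) .. 1 (pants-on-fire) and averaged.
from statistics import mean

elite_falsity = {"elite_A": 0.15, "elite_B": 0.62, "elite_C": 0.40}

def exposure_score(followed_elites, falsity=elite_falsity):
    """A user's misinfo exposure = mean falsity of the rated elites they follow."""
    rated = [falsity[e] for e in followed_elites if e in falsity]
    return mean(rated) if rated else None

print(exposure_score(["elite_A", "elite_B"]))  # 0.385
```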
Oct 25, 2022 17 tweets 9 min read
🚨Accuracy Prompt Meta-Thread🚨
We've proposed that prompting people to think about accuracy reduces misinfo sharing. But is this effect replicable & robust?
@GordPennycook & I analyzed 20 exps, N=26k
Answer: resounding YES, across many headlines/prompts
nature.com/articles/s4146…
1/ Previously we found a disconnect between what people judge as accurate and what they say they'd share- despite not wanting to share things they realize are false. Why? Largely bc people simply forget to consider accuracy when deciding what to share
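A sketch of how effects from many experiments can be pooled - a simple inverse-variance weighted average; the paper's actual meta-analytic model and numbers are not reproduced here, and the inputs below are made up.

```python
# Toy fixed-effect meta-analysis: pool per-study effects by inverse variance.
import numpy as np

effects = np.array([0.10, 0.05, 0.12, 0.08])     # hypothetical study effects
variances = np.array([0.002, 0.004, 0.003, 0.001])

weights = 1 / variances
pooled = np.sum(weights * effects) / np.sum(weights)
pooled_se = np.sqrt(1 / np.sum(weights))

print(f"pooled effect = {pooled:.3f} (SE {pooled_se:.3f})")
```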
Apr 14, 2022 11 tweets 5 min read
🚨New WP🚨
Many people - from Trump to @elonmusk - have accused Twitter of anti-conservative bias

Is this accusation accurate?

We test for evidence of such a bias empirically - and turns out it's more complicated than you might think...

psyarxiv.com/ay9q5
1/ The root of the challenge when inferring political bias is that Republicans/conservatives are substantially more likely to share misinformation/fake news, as shown e.g. by @andyguess @j_a_tucker @grinbergnir @davidlazer et al science.org/doi/10.1126/sc… science.org/doi/abs/10.112…
Feb 16, 2022 18 tweets 9 min read
🚨WP: Examining the psychology of misinformation around the globe🚨
Across 16 countries N=34k
➤Strong regularities in cognitive, social & ideological predictors of misinfo belief
➤Broad intervention efficacy (accuracy prompts, literacy tips, crowdsourcing)
psyarxiv.com/a9frz
1/ A lot has been learned about the psychology of misinformation/fake news, and what interventions may work - for an overview, see @GordPennycook's and my TICS review below:


BUT almost all of this work has focused on the West - and misinfo is a GLOBAL problem!
Sep 13, 2021 6 tweets 4 min read
🚩Working paper🚩
DIGITAL LITERACY & SUSCEPTIBILITY TO FAKE NEWS

Lots of assumptions - but little data - out there on the link b/w digital literacy & fake news

We find 2 diff digital lit measures predict ability to tell true vs false - but NOT sharing intent psyarxiv.com/7rb2m
1/ Lack of digital literacy is a favorite explanation in both the public and the academy for the spread of fake news/misinformation. But there's surprisingly little data investigating this, and the results that do exist are mixed. One issue is that digital literacy is operationalized in various different ways
Sep 1, 2021 19 tweets 9 min read
🚨Out in @ScienceAdvances🚨
SCALING UP FACT-CHECKING USING THE WISDOM OF CROWDS

How can platforms identify misinfo at scale? We find small groups of laypeople can match professional factcheckers when evaluating URLs flagged for checking by Facebook!

science.org/doi/10.1126/sc…
1/ Fact-checking could reduce misinformation
➤ Platforms can downrank flagged content so fewer users see it
➤ Warnings reduce belief and sharing

⚠️But it doesn't SCALE⚠️
Fact-checkers can't keep up w vast quantity of content posted every day

(FCs also accused of liberal bias)
Mar 17, 2021 25 tweets 23 min read
🚨Out now in Nature!🚨
A fundamentally new way of fighting misinfo online:

Surveys+field exp w >5k Twitter users show that gently nudging users to think about accuracy increases the quality of news shared - bc most users don't share misinfo on purpose
nature.com/articles/s4158…

1/ Why do people share misinfo? Are they just confused and can't tell what's true?

Probably not!

When asked about the accuracy of news, subjects rated true posts much higher than false ones. But when asked if they'd *share* online, veracity had little impact - instead, sharing was mostly about politics
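A sketch of the "discernment" comparison using simulated ratings (not the study's data): truth discernment = mean rating for true items minus mean rating for false items, computed separately for accuracy judgments and sharing intentions.

```python
# Simulated accuracy vs sharing discernment gap.
import numpy as np

rng = np.random.default_rng(1)
n = 500
is_true = rng.integers(0, 2, n).astype(bool)

# Accuracy judgments track veracity strongly; sharing intentions barely do.
accuracy = 0.8 * is_true + rng.normal(scale=0.3, size=n)
sharing = 0.1 * is_true + rng.normal(scale=0.3, size=n)

def discernment(ratings):
    """Mean rating for true items minus mean rating for false items."""
    return ratings[is_true].mean() - ratings[~is_true].mean()

print(f"accuracy discernment: {discernment(accuracy):.2f}")
print(f"sharing discernment:  {discernment(sharing):.2f}")
```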
Nov 1, 2020 8 tweets 5 min read
New WP for your doomscroll:

➤We follow 842 Twitter users with Dem or Rep bot
➤We find large causal effect of shared partisanship on tie formation: Users ~3x more likely to follow-back a co-partisan

psyarxiv.com/ykh5t/

Led by @_mohsen_m w/ @Cameron_Martel_ @deaneckles

1/ We are more likely to be friends with co-partisans offline & online

But this doesn't show a *causal* effect of shared partisanship on tie formation
* Party correlated w many factors that influence tie formation
* Could just be preferential exposure (eg via friend rec algorithm)
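A sketch of the basic causal comparison (the counts below are hypothetical, not the study's data): randomize whether the bot that follows a user shares the user's party, then compare follow-back rates across conditions.

```python
# Follow-back rates by randomized condition (hypothetical counts).
followed_back = {"co-partisan bot": 120, "counter-partisan bot": 40}
n_per_condition = 400

rates = {cond: k / n_per_condition for cond, k in followed_back.items()}
for cond, rate in rates.items():
    print(f"{cond}: {rate:.0%} followed back")
print(f"ratio: {rates['co-partisan bot'] / rates['counter-partisan bot']:.1f}x")
```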
Oct 8, 2020 14 tweets 7 min read
🚨Working paper alert!🚨
"Scaling up fact-checking using the wisdom of crowds"

We find that 10 laypeople rating just headlines match the performance of professional fact-checkers researching full articles - using a set of URLs flagged by an internal FB algorithm

psyarxiv.com/9qdza/ Fact-checking could help fight misinformation online:

➤ Platforms can downrank flagged content so that fewer users see it

➤ Corrections can reduce false beliefs (forget backfire effects: e.g. link.springer.com/article/10.100… by @thomasjwood @EthanVPorter)

🚨But there is a BIG problem!🚨
Mar 24, 2020 14 tweets 6 min read
Today @GordPennycook & I wrote a @nytimes op ed

"The Right Way to Fix Fake News"
nytimes.com/2020/03/24/opi…

tl;dr: Platforms must rigorously TEST interventions, b/c intuitions about what will work are often wrong

In this thread I unpack the many studies behind our op ed

1/
Platforms are under pressure to do something about misinformation. It would be simple to rapidly implement interventions that sound like they would be effective.

But just because an intervention sounds reasonable doesn’t mean that it will actually work: Psychology is complex!

2/
Mar 17, 2020 15 tweets 11 min read
🚨New working paper!🚨

"Fighting COVID-19 misinformation on social media:
Experimental evidence for a scalable accuracy nudge intervention"

We test if an intervention we developed for political fake news works for #COVID19- seems like YES!

PDF: psyarxiv.com/uhbk9/

1/ Previously we found people share political misinfo b/c social media distracts them from accuracy - NOT b/c they can't tell true v false, NOT b/c they don't care about accuracy

So nudging them to think about accuracy improved quality of news they shared!


2/
Nov 17, 2019 11 tweets 5 min read
🚨Working paper alert!🚨 "Understanding and reducing the spread of misinformation online"

We introduce a behavioral intervention (accuracy salience) & show in surveys+field exp w >5k Twitter users that it increases quality of news sharing

psyarxiv.com/3n9u8

1/ We first ask why people share misinformation. Is it because they simply can't assess the accuracy of information?

Probably not!

When asked about accuracy, MTurkers rate true headlines much higher than false ones. But when asked if they'd share online, veracity has little impact
2/