➤We followed 842 Twitter users with Dem- or Rep-identified bot accounts
➤We find a large causal effect of shared partisanship on tie formation: users were ~3x more likely to follow back a co-partisan
We are more likely to be friends with co-partisans offline & online
But this doesn't show a *causal* effect of shared partisanship on tie formation:
* Party is correlated w/ many factors that influence tie formation
* Could just be preferential exposure (e.g. via the friend recommendation algorithm)
So we tested the causal effect using a Twitter field experiment
Created bot accounts that strongly or weakly identified as Dem or Rep supporters
Randomly assigned 842 users to be followed by one of our accounts, and examined the prob that they reciprocated and followed our account back
3/
RESULTS!
➤Users were ~3x more likely to follow back bots whose partisanship matched their own
➤Strength of bot partisanship didn't matter much
➤Dems & Reps showed equivalent levels of tie-formation bias (no partisan asymmetry)
(toy sketch of this analysis below)
4/
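(Not the paper's actual code: just a minimal sketch of how these comparisons could be computed, assuming a hypothetical per-user CSV with columns user_party, bot_party, bot_strength, followed_back)

```python
# Toy sketch only: NOT the paper's analysis code.
# Assumes a hypothetical CSV with one row per user:
#   user_party ("D"/"R"), bot_party ("D"/"R"),
#   bot_strength ("strong"/"weak"), followed_back (0/1)
import pandas as pd

df = pd.read_csv("followback_experiment.csv")  # hypothetical file

# Did the user share the bot's partisanship?
df["copartisan"] = df["user_party"] == df["bot_party"]

# Headline comparison: follow-back rate for co- vs counter-partisan bots
rates = df.groupby("copartisan")["followed_back"].mean()
print(rates)
print("rate ratio:", rates[True] / rates[False])  # the "~3x" figure

# Does strength of the bot's partisanship matter?
print(df.groupby(["copartisan", "bot_strength"])["followed_back"].mean())

# Any partisan asymmetry (Dem vs Rep users)?
print(df.groupby(["user_party", "copartisan"])["followed_back"].mean())
```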
Shows a strong causal effect of shared partisanship on actual social tie formation
➤Ecologically valid support for prior results from affective polarization survey exps
➤Suggests partisan psych drives homophily, such that algorithmic help may be needed to increase cross-party connection
5/
What I find striking about these results is not so much that the effect exists per se, but rather how big it is
Also, nice how social media field exps can combine causal inference with ecological validity. V excited to do more in this space, led by @_mohsen_m
These results are another stark reminder (as if we needed more right now) of the political sectarianism that is gripping America, as described in the @EliJFinkel-led Science paper out this week science.sciencemag.org/content/370/65…
Happy doomscrolling, everyone
(& of course, comments appreciated!)
• • •
tl;dr: Platforms must rigorously TEST interventions, b/c intuitions about what will work are often wrong
In this thread I unpack the many studies behind our op-ed
1/
Platforms are under pressure to do something about misinformation. It would be simple to rapidly implement interventions that sound like they would be effective.
But just because an intervention sounds reasonable doesn’t mean that it will actually work: Psychology is complex!
2/
For example, it's intuitive that emphasizing a headline's publisher (ie the source) should help people tell true from false. Low-quality publisher? Question the headline.
But in a series of experiments, we found publisher info to be ineffective!
Previously we found people share political misinfo b/c social media distracts them from accuracy, NOT b/c they can't tell true v false, and NOT b/c they don't care about accuracy
So nudging them to think about accuracy improved the quality of the news they shared!
Like everyone else, we're losing sleep over #COVID19
To try to feel (slightly) useful, we decided to see how similar COVID-19 misinfo was to political misinfo from a cog psych perspective, and whether the accuracy nudge we'd come up with might help fight COVID-19 misinfo online
We first ask why people share misinformation. Is it because they simply can't assess the accuracy of information?
Probably not!
When asked about accuracy, MTurkers rate true headlines much higher than false ones. But when asked if they'd share online, veracity has little impact. 2/
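(Again a toy sketch, not the study's code: one way to quantify this accuracy-vs-sharing disconnect, assuming hypothetical long-format data with columns condition, headline_true, rating)

```python
# Toy sketch only: NOT the study's analysis code.
# Assumes hypothetical long-format data, one row per (participant, headline):
#   condition ("accuracy" or "sharing"), headline_true (0/1), rating (numeric)
import pandas as pd

df = pd.read_csv("ratings.csv")  # hypothetical file

# Mean rating for false (0) vs true (1) headlines, within each condition
means = df.groupby(["condition", "headline_true"])["rating"].mean().unstack()

# "Discernment" = true-minus-false gap: large when judging accuracy,
# near zero when deciding what to share
means["discernment"] = means[1] - means[0]
print(means)
```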
So why this disconnect between accuracy judgments and sharing intentions? Is it that we live in a "post-truth world" where people no longer *care* much about accuracy?
Probably not!
Those same Turkers overwhelmingly say that it's important to only share accurate information. 3/