New WP for your doomscroll:

➤We follow 842 Twitter users with a Dem or Rep bot
➤We find large causal effect of shared partisanship on tie formation: Users ~3x more likely to follow-back a co-partisan

psyarxiv.com/ykh5t/

Led by @_mohsen_m w/ @Cameron_Martel_ @deaneckles

1/
We are more likely to be friends with co-partisans offline & online

But this doesn't show a *causal* effect of shared partisanship on tie formation:
* Party correlated w many factors that influence tie formation
* Could just be preferential exposure (eg via friend rec algorithm)
So we test causal effect using Twitter field exp

2/

Created bot accounts that strongly or weakly identified as Dem or Rep supporters

Randomly assigned 842 users to be followed by one of our accounts, and examined the prob that they reciprocated and followed our account back

3/
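The design above boils down to comparing follow-back rates across randomly assigned conditions. A minimal sketch of that comparison in Python, with made-up counts (the per-arm split and follow-back numbers are illustrative assumptions, not the paper's actual data):

```python
# Hypothetical sketch: compare follow-back rates for users assigned a
# co-partisan vs a counter-partisan bot. All counts below are invented
# for illustration; only the 842-user total matches the design.

def follow_back_rate(follow_backs: int, assigned: int) -> float:
    """Proportion of assigned users who reciprocated the follow."""
    return follow_backs / assigned

# Made-up example counts (421 users per arm, 842 total)
copartisan_rate = follow_back_rate(follow_backs=90, assigned=421)
counterpartisan_rate = follow_back_rate(follow_backs=30, assigned=421)

# Effect of shared partisanship on tie formation, as a rate ratio
ratio = copartisan_rate / counterpartisan_rate
print(f"co-partisan: {copartisan_rate:.2%}, "
      f"counter-partisan: {counterpartisan_rate:.2%}, "
      f"ratio: {ratio:.1f}x")
```

With these invented counts the ratio comes out to 3x, mirroring the headline result; the paper's actual analysis may differ.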
RESULTS!

➤Users were ~3x more likely to follow-back bots whose partisanship matched their own

➤Strength of bot partisanship didn't matter much

➤Dems & Reps showed equivalent levels of tie-formation bias (no partisan asymmetry)

4/
Shows strong causal effect of shared partisanship on actual social tie formation

➤Ecologically valid support for prior results from affective polarization survey exps
➤Suggests partisan psych drives homophily, such that algorithmic help may be needed to increase cross-party connection

5/
(Although of course not clear if it's actually beneficial to increase cross-party connection - @chris_bail et al suggest maybe not pnas.org/content/115/37…, @RoeeLevyZ suggests maybe yes papers.ssrn.com/sol3/papers.cf…)

6/
What I find striking about these results is not so much that the effect exists per se, but rather how big it is

Also, nice how social media field exps can combine causal inference with ecological validity. V excited to do more in this space, led by @_mohsen_m

7/
These results are another stark reminder (as if we needed more right now) of the political sectarianism that is gripping America, as described in the @EliJFinkel-led Science paper out this week
science.sciencemag.org/content/370/65…

Happy doomscrolling everyone

(& of course, comments appreciated!)

