Americans report liking bipartisanship, but attitudes toward bipartisan issues quickly become polarized when associated w/ partisan identities. Lots of work shows this in lab settings, but it's hard to examine experimentally in the real world. How do you randomize party cues outside the lab?
We leverage a bipartisan issue that suddenly became associated with a partisan identity in 2018: the presidential alert. Remember this thing? The Trump admin sent it to all Americans' phones in 2018.
If you don't, SNL did a great sketch on it:
The alert had bipartisan roots (Bush and Obama created it after Hurricane Katrina), but it quickly became polarized before its inaugural test under the Trump admin.
We rapidly recruited a sample of U.S. adults immediately
before the alert was sent, so that participants received the alert during the survey. We exploited the timing of the alert to randomize whether they answered questions about the alert moments before or after receiving it.
While prior research suggests that associating bipartisan issues with partisan identities polarizes attitudes, we find little evidence that receiving the
alert from the Trump admin elicited a partisan reaction. We looked at attitudes toward both the alert and privacy.
But maybe receiving the presidential alert from the Trump admin wasn't a strong enough cue. So, in the same study we ran a 2nd experiment--respondents were randomly assigned to receive info explicitly associating the alert with either the Trump or Obama administration.
But the results here were similar.
takeaway #1: Online surveys--especially those that draw respondents from many panels at once--provide an opportunity to exploit the timing of political events with large samples.
takeaway #2: We find little evidence that associating
the alert w/ the Trump administration had any polarizing effect on attitudes, even when using explicit partisan cues, suggesting that at least some bipartisan attitudes are not as easily polarized as prior work implies.
and here are the paper links (w/ lots of robustness checks) and the Washington Post writeup:
How should researchers determine whether misinformation interventions work?
We argue that researchers should 1) measure whether people believe or share both false *and* true content and 2) assess efficacy using a measure of discernment 🧵
Who believes and shares misinfo? Why? What can we do about it?
Answering these questions requires measuring whether ppl believe and share misinfo.
But studies purporting to answer the same questions often use different research designs, inhibiting progress on combating misinfo.
We provide a framework for differentiating between research designs on the basis of the normative claims they make about how people should interact with information.
Then we show that different designs lead to different conclusions about whether misinfo interventions work.
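For intuition, here's a minimal sketch of how a discernment-based evaluation can diverge from one that only tracks false content (illustrative toy numbers and a hypothetical `discernment` helper, not the paper's actual measure or code):

```python
# Toy sketch, assuming hypothetical 0-1 belief ratings for true and false headlines.
import statistics

def discernment(true_ratings, false_ratings):
    """Mean belief in true headlines minus mean belief in false headlines."""
    return statistics.mean(true_ratings) - statistics.mean(false_ratings)

# An intervention that lowers belief in *everything*:
control_false,   control_true   = [0.5, 0.6, 0.4], [0.8, 0.7, 0.9]
treatment_false, treatment_true = [0.3, 0.4, 0.2], [0.6, 0.5, 0.7]

# Judged only on false content, the intervention looks effective (mean belief drops 0.5 -> 0.3),
# but discernment is unchanged (~0.30 in both groups), so people are no better at
# telling true from false.
print(discernment(control_true, control_false))      # ~0.30
print(discernment(treatment_true, treatment_false))  # ~0.30
```

Which of these outcomes counts as "working" bakes in a normative claim: do we want people to reject more content overall, or to better distinguish true from false?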
But WHY do Reps share more fake news than Dems? Reps could be more *susceptible* to sharing fake news (more closed-minded, less attentive to accuracy...)
Or, Reps could simply be *exposed* to more fake news.
From social media data, we can't easily tell which it is...
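A toy illustration of the confound (hypothetical numbers, not from any dataset):

```python
# Toy sketch: the same observed number of fake-news shares is consistent with
# either high susceptibility or high exposure.
def expected_shares(n_fake_seen: int, p_share_if_seen: float) -> float:
    """Expected shares = how much fake news you see * how likely you are to share it."""
    return n_fake_seen * p_share_if_seen

high_susceptibility = expected_shares(n_fake_seen=20,  p_share_if_seen=0.50)  # 10.0
high_exposure       = expected_shares(n_fake_seen=100, p_share_if_seen=0.10)  # 10.0
# Sharing counts alone can't separate these stories; you also need to measure exposure.
```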