Why do some ideas spread widely, while others fail to catch on?
@Jayvanbavel and I review the “psychology of virality,” or the psychological and structural factors that shape information spread online and offline.
Thread 🧵(1/n)
While studies suggest that outrage and negativity go viral online, social media may not be so unique:
-Negative gossip and word-of-mouth marketing are also more likely to spread.
-Negativity went “viral” in early newspapers and books.
Similar to how some viruses are more “contagious” than others, some forms of information appear to be more contagious than others across contexts.
The information-as-virus metaphor can be extended even further:
Underlying psychological processes (e.g., our tendency to attend to and remember negativity and high-arousal information) may explain why certain types of information go "viral" across contexts.
We review several studies in the virality literature. Most of them find that negativity and high-arousal emotions go viral. Yet, not all studies support this conclusion, and sometimes positivity goes viral. Why is that?
Structural features of an information environment (e.g., networks, norms, incentive structures) interact with our psychology to shape information spread, which may help explain conflicting findings.
The online world has unique structural features: for example, a small number of “superspreaders” spread the most hostility in all contexts, but hostile individuals have a much larger reach online due to larger networks and attention-maximizing social media algorithms.
This may explain, in part, why widely shared content is often not widely liked, a phenomenon we call the “paradox of virality”: journals.sagepub.com/doi/abs/10.117…
Future work on virality should leverage recent advances in AI to explore what goes “viral” across languages, cultures, and time periods. pnas.org/doi/10.1073/pn…
It is unclear whether belief in (mis)information is driven by a lack of knowledge or a lack of motivation to be accurate.
To help answer this question, we experimentally manipulated people’s motivations to see how this impacted their judgements of news headlines.
We found that providing people with very small financial rewards of up to $1 improved people’s performance at discerning between true and false headlines.
It also reduced the partisan divide in belief between Republicans and Democrats by 30%.
We found that following, retweeting, or favoriting low-quality news sources – and being central in a US conservative Twitter network – is associated with vaccine hesitancy (n = 2,064).
There has been speculation that an “infodemic” of misinformation on social media is contributing to vaccine hesitancy.
We set out to test how one’s online information diet is associated with vaccine hesitancy by linking survey data to Twitter data.
In Study 1, we looked at various Twitter “influencers” and computed the mean levels of vaccine confidence among participants who followed them in both the United States and the United Kingdom.
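The linkage step above can be sketched in a few lines. This is a minimal illustration, not the paper's actual pipeline: the column names, influencer labels, and confidence values are all hypothetical, and it assumes survey responses have already been joined to each participant's follow list.

```python
import pandas as pd

# Hypothetical data: survey responses linked to Twitter follows.
# Names and values are illustrative, not taken from the study.
survey = pd.DataFrame({
    "participant_id": [1, 2, 3, 4],
    "vaccine_confidence": [0.9, 0.2, 0.6, 0.4],  # e.g., scaled 0-1
})
follows = pd.DataFrame({
    "participant_id": [1, 1, 2, 3, 4],
    "influencer": ["A", "B", "B", "A", "B"],
})

# Join follows to survey responses, then average confidence
# among each influencer's followers.
linked = follows.merge(survey, on="participant_id")
mean_confidence = linked.groupby("influencer")["vaccine_confidence"].mean()
print(mean_confidence)
```

In this toy data, influencer A's followers average higher confidence than B's, which is the kind of per-influencer contrast the study design compares across the US and UK samples.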
Our meta-analysis of all publicly available data on the "accuracy nudge" intervention found that accuracy nudges have little to no effect for US conservatives and Republicans. (1/9)
Replicating prior work, we found that accuracy nudges significantly improved the quality of articles shared for Democrats in nearly all samples, but no significant effects were found for Republicans in *any* of the samples.
In our recent @PNASNews paper, we suggested that Facebook's algorithm change in 2018, which gave more weight to reactions/comments, was rewarding posts expressing out-group animosity.
Recent reporting from the @WSJ finds that @Facebook was aware of this issue.
In our paper, we found that posts about the political outgroup (which tend to be very negative) receive much more overall engagement -- particularly in the form of "angry" reactions, "haha" reactions, comments and shares.
As shown below, the Facebook algorithm shift gave priority to the kind of engagement that we found was associated with out-group negativity (comments and reactions).
Analyzing social media posts from news accounts and politicians (n = 2,730,215), we found that the biggest predictor of "virality" (out of all predictors we measured) was whether a social media post was about one's outgroup.
Specifically, each additional word about the opposing party (e.g., “Democrat,” “Leftist,” or “Biden” if the post was coming from a Republican) in a social media post increased the odds of that post being shared by 67%.
Negative and moral-emotional words also slightly increased the odds of a post being shared, positive words slightly decreased the odds, and in-group words had no effect.
Out-group words were by far the strongest predictor of virality that we measured.
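One detail worth making concrete: a 67% increase per word is an odds ratio, so the effect compounds multiplicatively rather than adding up. A minimal sketch of that arithmetic (the per-word ratio is from the finding above; everything else is illustrative):

```python
# Odds ratio per additional out-group word, from the reported finding.
ODDS_RATIO = 1.67

def odds_multiplier(n_outgroup_words: int) -> float:
    """Factor by which the odds of a post being shared are multiplied
    when it contains n out-group words (odds ratios compound)."""
    return ODDS_RATIO ** n_outgroup_words

print(odds_multiplier(1))  # 1.67
print(odds_multiplier(3))  # ~4.66, not 3 * 0.67 = 2.01 added
```

Note the distinction: odds, not probability. Multiplying the odds by 4.66 does not multiply the share probability by 4.66, especially when the baseline probability is not tiny.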