Steve Rathje · Jul 17
🚨New paper in @TrendsCognSci 🚨

Why do some ideas spread widely, while others fail to catch on?

@Jayvanbavel and I review the “psychology of virality,” or the psychological and structural factors that shape information spread online and offline.

Thread 🧵 (1/n)
While studies suggest that outrage and negativity go viral online, social media may not be so unique:
-Negative gossip and word-of-mouth marketing are also especially likely to spread.
-Negativity went “viral” in early newspapers and books.
Similar to how some viruses are more “contagious” than others, some forms of information appear to be more contagious than others across contexts.

The information-as-virus metaphor can be extended even further:
Underlying psychological processes (e.g., our tendency to attend to and remember negativity and high-arousal information) may explain why certain types of information go "viral" across contexts.
We review several studies in the virality literature. Most of them find that negativity and high-arousal emotions go viral. Yet, not all studies support this conclusion, and sometimes positivity goes viral. Why is that?
Structural features of an information environment (e.g., networks, norms, incentive structures) interact with our psychology to shape information spread, which may help explain conflicting findings.
The online world has unique structural features: for example, a small number of “superspreaders” spread the most hostility in all contexts, but hostile individuals have a much larger reach online due to larger networks and attention-maximizing social media algorithms.
This may explain, in part, why widely shared content is often not widely liked, a phenomenon we call the “paradox of virality”: journals.sagepub.com/doi/abs/10.117…
Future work on virality should leverage recent advances in AI to explore what goes “viral” across languages, cultures, and time periods: pnas.org/doi/10.1073/pn…
Check out the full paper here: authors.elsevier.com/c/1lRke4sIRvW-…


More from @steverathje2

Oct 3, 2024
In 2 digital field experiments, we found that unfollowing hyperpartisan influencers on Twitter:

-Reduced partisan animosity by 24%—with effects persisting for 6 months!
-Increased satisfaction with Twitter
-Led people to share higher-quality news

(thread) doi.org/10.31234/osf.i…
Research suggests that a small number of influential accounts (or “influencers”) contribute a lot of toxicity online.

In a new working paper, we set out to test the long-term *causal* impact of exposure to these hyperpartisan influencers.
We assembled politically balanced lists of hyperpartisan Twitter influencers that we would later ask participants to unfollow.

We conducted a correlational study in a sample of survey data linked to Twitter data (n = 1,417) finding that:

1) Following these influencers is associated with partisan animosity
2) These influencers tend to use toxic language and share low-quality news
Mar 6, 2023
🚨Out now in @NatureHumBehav 🚨

Across 4 experiments (n = 3,364), we found that motivating people to be accurate via a small financial incentive:

-Improved people’s discernment between true and false news
-Reduced the partisan divide in belief

nature.com/articles/s4156…
It is unclear whether belief in (mis)information is driven by a lack of knowledge or a lack of motivation to be accurate.

To help answer this question, we experimentally manipulated people’s motivations to see how this impacted their judgements of news headlines.
We found that providing people with very small financial rewards of up to $1 improved people’s performance at discerning between true and false headlines.

It also reduced the partisan divide in belief between Republicans and Democrats by 30%.
Sep 30, 2022
🚨 New paper in @PNASNexus 🚨

We found that following, retweeting, or favoriting low-quality news sources – and being central in a US conservative Twitter network – is associated with vaccine hesitancy (n = 2,064).

doi.org/10.1093/pnasne…
There has been speculation that an “infodemic” of misinformation on social media is contributing to vaccine hesitancy.

We set out to test how one’s online information diet is associated with vaccine hesitancy by linking survey data to Twitter data.
In Study 1, we looked at various Twitter “influencers” and computed the mean levels of vaccine confidence among participants who followed them in both the United States and the United Kingdom.
Jan 4, 2022
Now out in @PsychScience:

Our meta-analysis of all publicly available data on the "accuracy nudge" intervention found that accuracy nudges have little to no effect for US conservatives and Republicans. (1/9)

sage.figshare.com/articles/journ…
Our paper (with @roozenbot @CecilieTraberg @jayvanbavel & @Sander_vdLinden) responds to a recent set of @PsychScience & @Nature papers finding that nudging people to think about accuracy can reduce misinformation sharing:

nature.com/articles/s4158…
journals.sagepub.com/doi/10.1177/09…
Replicating prior work, we found that accuracy nudges significantly improved the quality of articles shared for Democrats in nearly all samples, but no significant effects were found for Republicans in *any* of the samples.
Sep 15, 2021
In our recent @PNASNews paper, we suggested that Facebook's algorithm change in 2018, which gave more weight to reactions/comments, was rewarding posts expressing out-group animosity.

Recent reporting from the @WSJ finds that @Facebook was aware of this issue.
In our paper, we found that posts about the political outgroup (which tend to be very negative) receive much more overall engagement -- particularly in the form of "angry" reactions, "haha" reactions, comments and shares.

The Facebook algorithm shift gave priority to exactly the kinds of engagement that we found were associated with out-group negativity (comments and reactions).

Jun 23, 2021
🚨 Now out in @PNASNews 🚨

Analyzing social media posts from news accounts and politicians (n = 2,730,215), we found that the biggest predictor of "virality" (out of all predictors we measured) was whether a social media post was about one's outgroup.

pnas.org/content/118/26…
Specifically, each additional word about the opposing party (e.g., “Democrat,” “Leftist,” or “Biden” if the post was coming from a Republican) in a social media post increased the odds of that post being shared by 67%.
Negative and moral-emotional words also slightly increased the odds of a post being shared, positive words slightly decreased the odds, and in-group words had no effect.

Out-group words were by far the strongest predictor of virality that we measured.
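To see how an odds-ratio effect like this compounds, here is a minimal sketch. The 1.67 multiplier per out-group word comes from the thread; the baseline odds value is a made-up assumption for illustration only, not a figure from the paper.

```python
# Illustrative sketch of a multiplicative odds-ratio model:
# each additional out-group word multiplies the odds of sharing by 1.67
# (a 67% increase, per the thread). BASELINE_ODDS is hypothetical.

BASELINE_ODDS = 0.05  # assumed baseline odds, not from the paper
ODDS_RATIO = 1.67     # per out-group word, from the thread

def share_odds(n_outgroup_words: int) -> float:
    """Odds of a post being shared given n out-group words."""
    return BASELINE_ODDS * ODDS_RATIO ** n_outgroup_words

def odds_to_probability(odds: float) -> float:
    """Convert odds to a probability: p = odds / (1 + odds)."""
    return odds / (1 + odds)

for n in range(4):
    odds = share_odds(n)
    print(f"{n} out-group words: odds={odds:.3f}, p={odds_to_probability(odds):.3f}")
```

Note that because the effect is multiplicative on the odds scale, three out-group words raise the odds by 1.67³ ≈ 4.66×, not by 3 × 67%.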
