The final keynote of #bystander22 is by @jayvanbavel on "Morality in the Anthropocene".

We evolved in small-scale groups but are now connected in vast social media networks. What are the consequences? 1/17
As EO Wilson noted: "We have Paleolithic emotions, medieval institutions and godlike technology". How can we manage our ancient emotions in a modern technological environment? Contrast this with Zuckerberg's notion of "moving fast and breaking things". 2/17
57% of all humans use social media, and the average user spends 2.2 hours per day on it. On average, 5.7 years of your life will be spent on social media. All of this has emerged fast. 3/17
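A quick back-of-the-envelope check of how the per-day figure implies the lifetime figure. The span of regular use is an illustrative assumption (roughly 62 years); the thread itself only cites the 2.2 hours/day and 5.7 years numbers:

```python
HOURS_PER_DAY = 2.2   # average daily use cited in the talk
SPAN_YEARS = 62       # assumed span of regular use (illustrative assumption)

# Fraction of each day spent on social media, scaled over the usage span
fraction_of_day = HOURS_PER_DAY / 24
lifetime_years = fraction_of_day * SPAN_YEARS

print(round(lifetime_years, 1))  # → 5.7
```

So 2.2 hours a day, sustained over about six decades, is indeed on the order of 5.7 years of waking life.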
Social media is an attention economy. What drives engagement, and what are the effects of this?

We don't know. Science is lagging behind technological innovation. Data is lacking, and knowledge is developed in segregated communities. This is a truly interdisciplinary task. 4/17
But we know that the structure of social networks and their info flows are engineered to maximize profitability. And we know (from whistleblowers) that CEOs do not react to signals of problems. The focus is on profit. 5/17
Crockett (2017) showed that people mainly learn about moral and immoral acts from online sources, more so than from in-person sources or TV and print media. What we especially learn about from online sources is *immoral* acts. 6/17
Online sources trigger more moral outrage than legacy media & in-person sources. This is not always bad. Examples are #metoo & #blm. An earlier example is the Arab Spring, which was coordinated on social media, and dissent in Russia today is a current case. 7/17
But there are anti-democratic examples too, e.g., the Jan 6 insurrection.

The power of social media is that info can travel rapidly via loose networks. These collective processes can mobilize individuals. 8/17
Mooijman et al. show that moral rhetoric on social media can predict the eruption of offline violence. 9/17
Why does this messaging spread so fast?

The process is this: You read something that triggers you. You can quickly re-share it, which triggers others. This creates cascades.

These cascades are problematic when the info is false or harassing. 10/17
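The read-then-reshare process described above is, structurally, a branching process: each share triggers some random number of further shares. A minimal sketch, purely illustrative (the Poisson reshare model and all parameter values are assumptions, not from the talk):

```python
import math
import random

def simulate_cascade(mean_reshares=1.5, max_steps=10, seed=42):
    """Toy branching process: each active share triggers a
    Poisson-distributed number of further shares."""
    rng = random.Random(seed)

    def poisson(lam):
        # Knuth's algorithm for Poisson sampling (stdlib only)
        threshold = math.exp(-lam)
        k, p = 0, 1.0
        while True:
            p *= rng.random()
            if p <= threshold:
                return k
            k += 1

    active, total = 1, 1  # one initial post
    for _ in range(max_steps):
        new = sum(poisson(mean_reshares) for _ in range(active))
        total += new
        active = new
        if active == 0:  # cascade died out
            break
    return total
```

With a mean reshare rate below 1, cascades fizzle out quickly; above 1, they can grow explosively, which is the dynamic that makes false or harassing content hard to contain once sharing starts.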
The three important factors according to the MAD model are:

Motivation (e.g., group identity) - Attention (creating engagement) - Design (e.g., algorithms that amplify certain content) 11/17
These features interact to shape what people share.

For example, when sharing an article, particular sentences are selected to maximize attention, even if they are not representative. This can create a competition for attention through sharing ever more extreme content. 12/17
A 2017 study found that moral words in online messages increased sharing dramatically. Emotion alone doesn't do this; it is the combination of morality and emotionality that drives online sharing. A later meta-analysis provides further support. 13/17
Why do moral emotions spread? Using the "attentional blink task", a 2020 paper found that these words capture attention more. A follow-up study found that words that capture more attention in the lab are also more likely to be retweeted on Twitter. 14/17
Importantly, the increased sharing of moral-emotional words reflects increased sharing only among like-minded people, i.e., the ingroup. This "echo chamber effect" isn't found for other types of language. This has been confirmed both on Twitter and in lab experiments. 15/17
In fact, when people use more moral-emotional words, people from the other group react negatively. 16/17
A lot of this research has been done in the US. We need a more global sense of what is happening. 17/17


