Last week the World Economic Forum published its 'Global Risks Report' identifying misinformation and disinformation as the *top global threats over the next two years*. In this essay I argue its ranking is either wrong or so confused it's not even wrong: conspicuouscognition.com/p/misinformati…
Some background: Since 2016 (Brexit and the election of Trump), policy makers, experts, and social scientists have been gripped by panic about the political harms of disinformation and misinformation. Against this backdrop, the World Economic Forum's ranking is not surprising.
Responses to the ranking were polarised. It's fair to say @NateSilver538 👇, who linked my essay in support of his view, was not a fan. conspicuouscognition.com/p/misinformati…
Understandably, many leading misinformation researchers disagreed with Silver's assessment. They argued that misinformation *is* the top global threat because it causes or exacerbates all other threats, including war.
In my essay, I argue that on a narrow, technical understanding of dis/misinformation, the ranking seems completely wrong - bizarre, even. There is simply not good evidence that dis/misinformation, so understood, is a great threat, let alone the greatest.
In response to this, one might argue that this conclusion only follows from an overly strict definition of dis/misinformation. There is no doubt that many of humanity's problems arise in large part because people don't have good, accurate models of reality.
Maybe, then, we can just use the term 'misinformation' to refer to whatever factors cause people - whether ordinary people or those in positions of power - to have bad beliefs and make bad decisions.
As attractive as this line of reasoning is, I argue that it is ultimately confused. It is - in the famous put-down of physicist Wolfgang Pauli - "not even wrong". There are three basic reasons for this:
1. Once we understand 'misinformation' to include subtle ways in which narratives, ideologies, and systems bias people's priorities and perceptions of reality, experts and elites at Davos are definitely not in a privileged position to identify misinformation.
2. Using labels like 'dis/misinformation' to refer to the extremely complex set of psychological, social, political, and institutional factors that cause people to hold bad beliefs and make bad decisions does not help us to understand the world or address global threats.
3. On extremely expansive understandings of the terms 'dis/misinformation', they are chronic features of the human condition and politics, and there is no reason to think they have gotten worse in recent years as a result of social media or AI. In some ways, they've improved!
That's all from me. Thanks for reading this far. You can subscribe to my weekly essays here: conspicuouscognition.com
I'm going to be writing here 👇 ≈weekly this year. In this first post I introduce the blog, explain its title 'Conspicuous Cognition', and outline why the pursuit of social approval drives the evolutionary weirdness of human behaviour and thought. 1/7 conspicuouscognition.com/p/conspicuous-…
The blog is an experiment. I like writing, I like publicly engaging with ideas and debates, and now that I have a permanent academic job I can express and argue for controversial takes - which I will be doing - with less fear of consequences. 2/7
"Conspicuous cognition" is a play on Thorstein Veblen's term "conspicuous consumption", which refers to the consumption of goods and services not for practical or hedonic reasons but to signal one's wealth and status. 3/7
New essay: I argue that misinformation is often better viewed as a symptom of deep societal problems rather than their cause. When that’s true, interventions like debunking and censorship are unlikely to help – and might make things worse. (1/15) iai.tv/articles/misin…
The central intuition driving the modern misinformation panic is that people – specifically *other* people 👇 – are gullible and hence easily brainwashed into holding false beliefs. This idea is wrong. (2/15) journals.sagepub.com/doi/abs/10.117…
People are discriminating and suspicious learners, if anything placing too much weight on their own views relative to those of others. Persuasion is therefore extremely difficult, and even intense propaganda campaigns and advertising efforts routinely have minimal effects 👇 (3/15)
Dis/misinformation research confronts a dilemma: in order to be viewed as legitimate and win broad public support, it must restrict its focus to extremely clear-cut cases such as fake news, bizarre conspiracy theories, and easily demonstrable falsehoods. 1/11
The problem is that such content is rare and not very consequential. Many people ignore it or engage with it for reasons independent of belief, and the small minority of the population who seek it out do so because of pre-existing traits, beliefs and dispositions. 2/11
It's not purely epiphenomenal. It can have real effects which are worthy of study - but it simply does not warrant the enormous amount of attention and panic that the topic of misinformation currently receives. 3/11
According to this thread, anyone who is sceptical that emotionality (the tendency to evoke emotions) is a fingerprint of misinformation is simply ignorant of scientific consensus on the topic. I disagree and would like to explain why:
First, it's important to be clear about which claim is in dispute. If emotionality is a fingerprint of misinformation, misinformation on average must exhibit higher rates of emotionality than reliable information. That's a striking claim which, if true, has important implications.
Here are some things that obviously *cannot* support this claim. It is not enough to point to some instances of misinformation and show that they exhibit emotionality. Lots of reliable information exhibits emotionality and lots of misinformation is dispassionate.
In many discussions distinguishing misinformation and disinformation, it’s assumed there is a clean distinction between innocent mistakes and intentional deception. This is wrong: people often sincerely embrace the self-serving beliefs they are motivated to propagate to others.
That someone is pushing an unfounded but self-serving narrative does not mean they are *insincere*. Your sincere beliefs, including those you most strongly identify with, often result from deep-rooted psychological tendencies specialised for propaganda and impression management.
Is misinformation a dangerous virus? Are we living through an infodemic? Is there a vaccine for misinformation? In this review of Sander van der Linden's (@Sander_vdLinden) new book 'Foolproof', I argue that the answer to these questions is "no". A thread: bostonreview.net/articles/the-f…
Before going into some objections, let me say that I recommend the book. It's extremely interesting, and it provides an exceptionally clear and informative overview of modern misinformation research. However, I do strongly disagree with its overall perspective.
According to the influential narrative of 'Foolproof', harmful false beliefs in society generally result from "infection" by a "misinformation virus" that has an intrinsic "DNA", which individuals can nevertheless be "inoculated" against by means of a "misinformation vaccine".