We surveyed 150 experts on misinformation and identified areas of expert consensus regarding definitions of misinformation, its determinants, solutions, and the future of the field.
1) Definitions
Experts defined misinformation as “False and misleading information” or “False and misleading information spread unintentionally”.
Experts using qualitative methods were more likely to include intentionality in the definition than those using quantitative methods.
Experts agreed that pseudoscience, conspiracy theories, lies, and deepfakes are misinformation, while satirical and parodic news are not.
There was less agreement and more variability among experts regarding propaganda, rumors, hyperpartisan news, and clickbait headlines.
2) Determinants
The most widely endorsed reason why people believe & share misinformation was partisanship.
Identity, confirmation bias, motivated reasoning & low trust in institutions also received high levels of agreement, while education and access to reliable news did not.
3) Debated topics
Experts agreed that social media platforms worsened the misinformation problem, and that people are exposed to more opposing viewpoints online than offline.
Experts were divided on whether misinformation has increased in the past ten years.
The most polarizing statement was that “misinformation played a decisive role in the outcome of the 2016 U.S. election”.
Political scientists were skeptical of this claim (73% disagreed and 14% agreed), whereas psychologists were not (26% disagreed and 54% agreed).
4) Solutions
Experts agreed that current interventions against misinformation, such as media literacy, labeling, or fact-checking, would be effective if deployed in the wild and widely adopted by social media companies or institutions.
Experts were in favor of most system-level interventions against misinformation, such as platform design changes, algorithmic changes, content moderation, de-platforming, and stronger regulations, while they were against shadowbans.
5) Future of the field
Experts agreed that in the future, it will be important to collect more data outside of the US, do more interdisciplinary work, examine subtler forms of misinformation, study platforms other than Twitter and FB, and develop better theories & interventions.
Ultimately, we hope these findings can help policymakers and platforms to tackle misinformation more efficiently, journalists to have a more representative view of experts’ opinions on misinformation, and scientists to move the field forward.
In the Appendix, we report the results of the survey broken down by the disciplines and methods experts use to study misinformation.
If you’re interested in playing with the data, everything you need should be here: osf.io/jd9xf/. Ask me if it’s not.
And of course thanks a lot to all the researchers of the misinfo community who took part in this survey (for free) 🙏🥰
And many thanks to my amazing co-authors @berriche_manon @farkasjohan @hen_drik @steverathje2 😊
🚨 New preprint 🚨 With @hugoreasoning & Emma, we identified a factor that could explain why people share news they suspect to be inaccurate (such as fake news): the ‘interestingness-if-true’ of a piece of news.
Think about it: how interesting would it be if COVID-19 really were a bioweapon released by China? Or if the moon were populated by “bison, goats, unicorns, bipedal tail-less beavers and bat-like winged humanoids”?
2/10
The relevance of a piece of information is determined not only by its plausibility but also by the effect it would have if it were true. As long as someone is not entirely sure that COVID-19 is not a bioweapon, the claim retains some relevance, and thus some sharing value.