"The Right Way to Fix Fake News"
nytimes.com/2020/03/24/opi…
tl;dr: Platforms must rigorously TEST interventions, b/c intuitions about what will work are often wrong
In this thread I unpack the many studies behind our op-ed
1/
But just because an intervention sounds reasonable doesn’t mean that it will actually work: Psychology is complex!
2/
Example: showing publisher info alongside headlines sounds like it should help. But in a series of experiments, we found publisher info to be ineffective!
Details:
3/
The problem: Most false headlines never get checked (fact-checking doesn't scale) & users may see the lack of a warning as implying verification!
4/
But this can lead to people not just disbelieving false headlines, but also rejecting TRUE headlines (i.e., becoming generally suspicious)
link.springer.com/article/10.100…
5/
But also, intuitively UNappealing interventions may actually work well!
6/
But it turns out layperson source ratings actually agree quite well with fact-checkers:
7/
On concerns that crowdsourced ratings could be gamed:
1) Poll random/selected users rather than allowing anyone to contribute their opinion: prevents coordinated attacks
2) Knowing ratings will influence ranking ≠ gamed responses: most ppl don't care about politics
(rough sketch of the polling/aggregation idea below)
psyarxiv.com/z3s5k/
8/
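Aside: a minimal Python sketch of "poll a random sample of users, average their source ratings, compare to fact-checkers." Everything here (users, domains, scores, the sampling step) is invented for illustration; it is not any platform's pipeline or our study's code.

```python
import random
from statistics import mean

# Hypothetical data: each polled user's 0-1 trust rating for a few news domains.
user_ratings = {
    "user_001": {"reliable-news.example": 0.9, "clickbait.example": 0.2},
    "user_002": {"reliable-news.example": 0.8, "clickbait.example": 0.4},
    "user_003": {"reliable-news.example": 0.7, "clickbait.example": 0.1},
    "user_004": {"reliable-news.example": 0.9, "clickbait.example": 0.3},
}

# Hypothetical fact-checker trust scores for the same domains, for comparison.
fact_checker_scores = {"reliable-news.example": 0.95, "clickbait.example": 0.15}

def crowd_scores(ratings, sample_size=3, seed=7):
    """Poll a random subset of users (not self-selected volunteers) and
    average their trust ratings for each domain."""
    rng = random.Random(seed)
    sampled = rng.sample(sorted(ratings), k=min(sample_size, len(ratings)))
    scores = {}
    for domain in fact_checker_scores:
        votes = [ratings[u][domain] for u in sampled if domain in ratings[u]]
        if votes:
            scores[domain] = mean(votes)
    return scores

for domain, score in crowd_scores(user_ratings).items():
    print(f"{domain}: crowd={score:.2f} vs fact-checkers={fact_checker_scores[domain]:.2f}")
```

The point of sampling random users instead of taking open votes is that an attacker can't flood the poll; ranking could then be adjusted using the averaged crowd score per domain.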
For example, when people think more carefully, they are less likely to believe false headlines (but not less likely to believe true headlines)
9/
This is the case in survey experiments (e.g., looking at sharing intentions for false and true headlines about COVID-19)
10/
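Aside: a toy sketch of the "discernment" measure behind that claim: belief in true headlines minus belief in false headlines, compared across conditions. The data frame, column names, and condition labels below are made up; this is not the actual study data or analysis code.

```python
import pandas as pd

# Hypothetical survey responses: one row per participant x headline judgment.
# "deliberate" = participants nudged to think more carefully; "belief" = 0/1
# response to "is this headline accurate?"
df = pd.DataFrame({
    "condition":        ["control"] * 4 + ["deliberate"] * 4,
    "headline_is_true": [True, False, True, False] * 2,
    "belief":           [1, 1, 1, 0, 1, 0, 1, 0],
})

# Mean belief by condition and headline veracity. The prediction: thinking
# carefully lowers belief in FALSE headlines but leaves TRUE headlines intact.
summary = df.groupby(["condition", "headline_is_true"])["belief"].mean().unstack()

# Discernment = belief in true headlines minus belief in false headlines.
summary["discernment"] = summary[True] - summary[False]
print(summary)
```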
11/
Platforms need to do rigorous tests, and if they can show they are doing so, the public needs to be patient
The key: platform transparency about the evaluations they conduct internally, and collaboration with outside independent researchers who can publish their findings
12/
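Aside: "rigorous tests" here means randomized experiments. A toy sketch of what evaluating one intervention could look like (group sizes, share counts, and the intervention itself are invented; this is not a real platform experiment):

```python
from statsmodels.stats.proportion import proportions_ztest

# Hypothetical A/B test: users randomly assigned to a control feed or a feed
# with some intervention, then we count who shared at least one false headline.
shared_false = [480, 390]       # [control, intervention]
group_sizes = [10_000, 10_000]  # users per group

# Two-proportion z-test: is sharing of false headlines lower with the intervention?
stat, p_value = proportions_ztest(count=shared_false, nobs=group_sizes,
                                  alternative="larger")  # H1: control > intervention
print(f"control: {shared_false[0] / group_sizes[0]:.1%} shared false headlines")
print(f"intervention: {shared_false[1] / group_sizes[1]:.1%} shared false headlines")
print(f"z = {stat:.2f}, one-sided p = {p_value:.4f}")
```

A real evaluation would also check that the intervention doesn't suppress sharing of TRUE headlines (per tweet 9), not just false ones.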
I hope FB, and other platforms, will do more of these!
13/
docs.google.com/document/d/1k2…
end/