Jonathan Stray
Jan 7 · 11 tweets · 4 min read
Meta will stop paying pro fact checkers and switch to a community notes system. In part, they say, this is because of fact checker "bias."

I don't think this is good news. But it must be said that the fact checking community did this to itself.

How?
🧵

about.fb.com/news/2025/01/m…
In my view fact checking has two goals:
1) Identify harmful falsehoods
2) Create trust with the audience

The fact checking community largely succeeded at #1 and largely failed at #2. Their calls were mostly right, but many people no longer believed them.

Here's the evidence...
First, were fact checkers trusted? In the US, trust was largely split along partisan lines: Republicans thought fact checkers were biased.

Unfortunately, there was also far more false information circulating on the Republican side. That is also very well documented...
poynter.org/ifcn/2019/most…
No one wants to review the evidence that there was more misinfo on the right than the left. If you're on the left, it seems obvious. If you're on the right, it just sounds biased.
But the evidence is very consistent across studies and methods. E.g.
science.org/doi/10.1126/sc…
Most of these studies use fact checker ratings as ground truth -- so it's easy to say the labels are biased!
Now it gets interesting. When you ask politically balanced groups of regular people to identify misinfo, they mostly agree with the pros!
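To make that comparison concrete, here's a toy sketch of how this kind of crowd-vs-pros agreement gets measured: draw a politically balanced panel of laypeople, average their accuracy ratings per article, and correlate the result with the professional fact checkers' ratings. Everything here is hypothetical; the function name, input arrays, and panel size are made up for illustration and aren't taken from any particular study.

```python
import numpy as np

def crowd_vs_pros(lay_ratings, lay_party, pro_ratings, k=5, seed=0):
    """Toy comparison of a balanced lay panel to professional fact checkers.

    lay_ratings: (raters x articles) array of accuracy scores
    lay_party:   per-rater party code, -1 or +1
    pro_ratings: per-article scores from professional fact checkers
    Draws k raters from each side, averages their ratings per article,
    and returns the correlation of the panel average with the pros.
    """
    rng = np.random.default_rng(seed)
    dem = np.flatnonzero(lay_party == -1)
    rep = np.flatnonzero(lay_party == +1)
    panel = np.concatenate([rng.choice(dem, k, replace=False),
                            rng.choice(rep, k, replace=False)])
    crowd = lay_ratings[panel].mean(axis=0)  # balanced panel average
    return np.corrcoef(crowd, pro_ratings)[0, 1]
```

The thread's claim is that averages from balanced panels like this track the pros' ratings closely, even though individual raters disagree with each other a lot.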
In other words, the fact checking community was both
1) mostly correct in their calls
2) mostly distrusted by the people who were exposed to the most misinfo

And there was an obvious thing they could have done to fix #2. Namely: involve conservatives in the process.
The biggest factor determining whether fact checkers are trusted is whether the fact check comes from the outgroup.
No one trusts their political opponents to play fair when determining truth. Nor should they, honestly!
Unfortunately, the professional fact checking community was essentially an appendage of the professional journalism community.
This meant rigorous standards, but it also meant almost everybody was politically left-ish.
Here's American journalists' politics:
niemanlab.org/2024/12/newsro…
In short: there was a simple thing fact checking orgs could have done to build trust: hire conservative fact checkers. This wouldn't even have much changed the calls they made!
There is good evidence that this would have worked.
But they could not or would not do it.
Is a community notes system going to do better? There are important questions of timeliness and reach. However, research shows that community notes both
1) produce objectively good results, and
2) maintain the trust of the people being fact checked
academic.oup.com/pnasnexus/arti…
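For readers who haven't seen how this works: X's Community Notes (originally Birdwatch) ranks notes with a "bridging" matrix factorization, only surfacing notes that raters from across the political spectrum find helpful. Below is a minimal sketch of that idea, not X's or Meta's production code; the hyperparameters, learning rate, and dense-matrix setup are placeholder assumptions.

```python
import numpy as np

def bridging_scores(R, mask, dim=1, lam=0.03, lam_i=0.15,
                    lr=0.05, steps=2000, seed=0):
    """Sketch of bridging-based ranking: fit a regularized matrix
    factorization r_un ~ mu + b_u + b_n + f_u . f_n over the
    rater x note helpfulness matrix, then score each note by its
    intercept b_n. R: (raters x notes) ratings in [0, 1];
    mask: 1 where a rating exists, 0 elsewhere."""
    rng = np.random.default_rng(seed)
    U, N = R.shape
    mu = 0.0
    b_u, b_n = np.zeros(U), np.zeros(N)   # rater / note intercepts
    f_u = rng.normal(0, 0.1, (U, dim))    # rater viewpoint factors
    f_n = rng.normal(0, 0.1, (N, dim))    # note viewpoint factors
    for _ in range(steps):
        pred = mu + b_u[:, None] + b_n[None, :] + f_u @ f_n.T
        err = (R - pred) * mask           # only observed cells count
        mu += lr * err.mean()
        # heavier regularization on intercepts (lam_i > lam) pushes
        # partisan agreement into the viewpoint factors, so b_n only
        # grows when praise spans the divide
        b_u += lr * (err.mean(axis=1) - lam_i * b_u)
        b_n += lr * (err.mean(axis=0) - lam_i * b_n)
        f_u += lr * (err @ f_n / N - lam * f_u)
        f_n += lr * (err.T @ f_u / U - lam * f_n)
    return b_n  # surface a note if its intercept clears a threshold
```

The key design choice: the f_u · f_n term soaks up ratings explained by shared viewpoint, so a note earns a high intercept only when raters who usually disagree both rate it helpful. X's public documentation describes surfacing notes whose intercept clears a threshold (around 0.4), though the production system adds many refinements beyond this sketch.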
Again, it's not good that Meta has stopped working with pro fact checkers. They only ever checked a tiny minority of content, but having dedicated humans watching what's going viral is a very valuable service.

But this experiment failed. It could not generate trust.

/fin

