I wanted to set the record straight about two stories run this week by @thewire_in with untrue claims about Meta’s content moderation operations and processes. tl;dr these stories are fabrications. (1/6)
The stories are simply incorrect about the cross-check program, which was built to prevent potential over-enforcement mistakes. It has nothing to do with the ability to report posts, as alleged in the article. (2/6)
In its October 10 story, @thewire_in links to a supposed internal report about the incident in question. It appears to be a fabrication. The URL on that "report" is one that’s not in use. The naming convention is one we don't use. There is no such report. (3/6)
In its October 11 story, @thewire_in cites a supposed email from @andymstone. It is a fake. The supposed email address from which it was sent isn’t even Stone’s current email address, and the "to" address isn't one we use here either. There is no such email. (4/6)
That same story makes reference to an internal journalist "watchlist." There is no such list. (5/6)
These accusations are outlandish and riddled with falsities. Let’s hope @thewire_in is the victim, not the perpetrator, of this hoax. (6/6)
I want to clear up a misconception about hate speech on Facebook: our goal in combating it is to bring down its prevalence. (1/4)
That's why during every quarterly Community Standards Enforcement Report press call you hear me say: prevalence is the primary metric we should be held accountable to. It represents not what we caught, but what we missed and what people saw. (2/4)
The prevalence of hate speech on Facebook is now 0.05%, down by about half over the last three quarters. We attribute the vast majority of that drop to our efforts. (3/4)
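To make the metric above concrete, here is a minimal sketch of what a prevalence figure means numerically. It assumes prevalence is the share of content views that contained violating content, estimated from sampling; the exact methodology is not given in the thread, so the function and numbers here are illustrative only.

```python
# Hedged sketch: illustrating the prevalence metric described above.
# Assumption: prevalence = (views of violating content) / (total content views).
# The sample figures are hypothetical, chosen to show what 0.05% means.

def prevalence(violating_views: int, total_views: int) -> float:
    """Fraction of content views that contained violating content."""
    return violating_views / total_views

# A prevalence of 0.05% means roughly 5 of every 10,000 content views:
rate = prevalence(5, 10_000)
print(f"{rate:.2%}")  # 0.05%
```

Framed this way, the metric measures user exposure rather than enforcement volume, which is why the thread calls it "what we missed and people saw."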
I want to address claims raised in today’s @WSJ story, which cites old research to falsely suggest we aren’t invested in fighting polarization. The reality is we didn’t adopt some of the product suggestions cited because we pursued alternatives we believe are more effective. 1/8
What’s undeniable is we’ve made significant changes to the way FB works to improve the integrity of our products. We fundamentally changed News Feed ranking to favor content from friends and family over public content even if this meant people would use our products less. 2/8
We reduce the distribution of posts that use divisive and polarizing tactics like clickbait and engagement bait, and we’ve become more restrictive about the types of Groups we recommend to people. 3/8
New updates on our work to limit COVID-19 misinformation and connect people to reliable information: we’ve now directed over 2B people to health authority resources through our COVID-19 Info Center and pop-ups with >350M people clicking to learn more. about.fb.com/news/2020/04/c… 1/
Since the outbreak began, we’ve expanded our fact-checking partnerships to cover more than a dozen new countries. We’re now working with over 60 fact-checking organizations that review and rate content in more than 50 languages around the world. 2/
Based on a single fact-check, we can match duplicates of debunked posts. In March we displayed warnings on about 40M COVID-19-related posts on FB, based on ~4,000 fact-checker articles. When people saw those warning labels, 95% of the time they did not go on to view the original content. 3/
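The thread describes matching duplicates of a debunked post to a single fact-check. As a rough illustration of the general idea, here is a minimal sketch using a normalized-text fingerprint; this is an assumption for illustration only, not a description of Meta's actual matching system, which is not detailed in the thread.

```python
# Hedged sketch of matching near-duplicate posts to one fact-check.
# Assumption: a simple normalized-text hash stands in for whatever
# similarity matching the real system uses.
import hashlib
import re

def fingerprint(text: str) -> str:
    """Lowercase, strip punctuation, collapse whitespace, then hash."""
    normalized = re.sub(r"[^a-z0-9 ]", "", text.lower())
    normalized = " ".join(normalized.split())
    return hashlib.sha256(normalized.encode()).hexdigest()

# One fact-check debunks a claim; trivially edited reposts share the fingerprint,
# so a single rating can label many duplicate posts.
debunked = fingerprint("Garlic water CURES the virus!!!")
repost = fingerprint("garlic  water cures the virus")
print(debunked == repost)  # True
```

The point of the sketch is the fan-out: one fact-checker rating applied to one canonical post can propagate warning labels to every matched duplicate.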