Facebook released its new Community Standards Enforcement Report today. A couple of notes re Dangerous Orgs from the report & broader discussions: First, FB finds the vast majority of content removed for Dangerous Orgs itself--99% for terrorism and 96.9% for hate orgs. transparency.facebook.com/community-stan… 1/n
Second, normal enforcement accounts for the vast majority of what FB removes in this space. In Q2, it was more than 12M pieces of content across Dangerous Orgs. Announcing removals around a specific org tends to get the most attention, but this ongoing "housekeeping" is key. 2/n
Third, we also said today that since Oct 2019, we've conducted 14 Strategic Network Disruptions impacting 23 organizations. Nine of those 14 addressed white supremacist orgs. We've previously only mentioned a couple of these. 3/n
Fourth, we do plan more regular updates about SNDs in the future, but the amount and exact timing will be balanced against operational risk and the risk of adversarial behavior. Will stop there. Thanks for reading. 4/4
Okay, a couple of thoughts on Twitter. Presumably Elon & Morgan Stanley want to make money from this whole thing, which is harder if you're getting fined all the time. So I'm assuming that, despite Elon's rhetoric on Twitter, the company will still follow laws. 1/n
That means worst-of-the-worst CSAM still comes down, they still follow the new DSA and the TCO, they still remove copyright violations, they (probably) still follow FTO & SDGT lists. 2/n
Lots of bullying, hate speech, and hate group stuff is up in the air. But there's a market incentive to address these issues. 3/n
Important article on Telegram. Extremism researchers should read this. I have a couple of thoughts. The first is that it is a really useful story and I learned a lot. The second is that framing every tech platform story in terms of Facebook is unhelpful.
That second point is important because Telegram has been the central digital home for jihadist terrorists for years and tech media has paid it very little attention. Even now, in a genuinely useful article, the overarching frame is its relationship to FB.
Third (or fourth, I suppose), Telegram is a prime example of how harms manifest on platforms that aren't driven by Recommendations and Ads. Focusing narrowly on those issues leads toward joyful pile-ons of disfavored platforms but won't solve the underlying issues.
I told myself I wasn’t going to do this, but the whole Spotify/Rogan thing is annoying enough that I’m gonna break down and tweet. A couple of points:
Spotify has a $100 million contract with Rogan and, presumably, signed it because they can monetize his presence pretty dramatically. As a result, they have more editorial responsibility than FB or Twitter have w/ organic content.
A question, whispered aside: from an ethical perspective, does this mean platforms would be more responsible to moderate content if they paid users for the value of their content?
[Work Thread] A version of Facebook’s Dangerous Organizations and Individuals list was leaked today. I want to provide some context, especially about our legal obligations, and point out some inaccuracies and mischaracterizations in the coverage. 1/n
First, Facebook does not want violence organized or facilitated on its platform and the DOI list is an effort to keep highly risky groups from doing that. It’s not perfect, but that’s why it exists. 2/n
Second, the leaked list is not comprehensive. That matters b/c the list is constantly being updated as teams try to mitigate risk. Like other tech companies, we haven't shared the list in order to limit legal risk, reduce security risks, & minimize opportunities for groups to circumvent the rules.
[Work Thread Coming Up!] Today, Facebook has updated its public information about the Dangerous Individuals and Organizations policy, in line with recommendations from the Oversight Board made in early 2021. facebook.com/communitystand…
The policies aren’t changing, but we are providing more detail about them—in ways that have been a long time coming. I’m really glad we are taking this step. 2/
As a reminder, Facebook assesses DOI entities based on their behavior both online and offline. Most significant is an entity’s engagement with violence. Under various elements of the policy (some relatively new), we designate individuals, organizations, and networks of people. 3/
Since President Trump’s comments during last night’s debate, we’ve seen an uptick on FB in content related to the Proud Boys, including memes featuring his “stand back, stand by” language. At this point, much of this content condemns the PBs & President Trump’s comments about them.
That said, when this content is shared to support the Proud Boys or other banned individuals, we remove it, and we’ve already hashed these memes to stop other people from continuing to share them.
As a reminder, FB banned the Proud Boys in Oct 2018 so their accounts & other content praising or supporting them is prohibited. Enforcement in this adversarial space isn’t perfect, but the team has blocked hashtags, hashed images, & removed Accounts, Groups, Pages, Events, etc.