[Work Thread Coming Up!] Today, Facebook has updated its public information about the Dangerous Individuals and Organizations policy, in line with recommendations from the Oversight Board made in early 2021. facebook.com/communitystand…
The policies aren’t changing, but we are providing more detail about them—in ways that have been a long time coming. I’m really glad we are taking this step. 2/
As a reminder, Facebook assesses DOI entities based on their behavior both online and offline. Most significant is an entity’s engagement with violence. Under various elements of the policy (some relatively new), we designate individuals, organizations, and networks of people. 3/
So, what’s new? First, we are illustrating how these designations fall into three tiers that correspond with different enforcement paradigms. 4/
Entities in the Tier 1 category may not be Praised, Substantively Supported, or Represented on FB, though we do allow criticism, humor, and informational discussion. Entities in Tier 2 may not be Substantively Supported or Represented, nor Praised for violent activities. 5/
Tier 3 entities may not coordinate (Groups, Pages, dedicated Profiles, etc) on platform, but we don’t remove all content regarding them. 6/
Second are public definitions of a range of categories under the DOI policy. All are terms of art. You’ll see: Hate Banned Entities, Violent Non-State Actors, Violence Inducing Conspiracy Networks, & Militarized Social Movements. All of the new ones fall into Tier 2 or Tier 3. 7/
Third, we are providing definitions for “Praise,” “Substantive Support,” and “Representation,” which are also terms of art. (“Substantive Support” is the same idea as the historical use of the word “Support.”) 8/
Now, a couple of thoughts: The FB DOI policy has long been the bluntest and most aggressive in industry. Over time, the policy has grown more nuanced to allow Praise of some political actors and groups engaged in insurgency without targeting civilians. 9/
The DOI space is fundamentally adversarial. Balancing transparency while dangerous groups adjust tactics is difficult and we do it imperfectly. 10/
Our internal teams catch more of these shifts than folks realize, but research tracking these changes is really valuable. When you point out the shifts that lead to enforcement misses: that’s productive criticism. Thank you. 11/
Concern about adversarial shifts does impact FB’s approach to transparency on DOI. We don’t want to pass along information that will help dangerous groups avoid detection and facilitate harm. They constantly adjust as it is. 12/
I know some of you will ask about other OB requests, including re DOI policies. Please refer to FB's official responses on those. 13/
Finally, I want to note that external folks don’t always know when they’ve had an impact on the policy deliberations inside FB. Sometimes you’re consulted formally; sometimes internal folks are just reading your work. Please keep the thoughtful stuff coming. 14/
The OB is a key voice, & they had an important impact here. But so too did many researchers & others calling for disclosure, incl many internal colleagues. Thanks to everyone doing serious work on these issues. It’s not always a straight line, but you are having impact. /End
Actually, one post-script: This is an important update, but it doesn't mean we're done. The world is changing & the structure of extremist and militant entities is evolving. Like everyone else, FB will wrestle with those shifts and need to adjust.
Okay, a couple of thoughts on Twitter. I presume that Elon & Morgan Stanley want to make money from this whole thing, which is harder if you're getting fined all the time. So I'm assuming that, despite Elon's rhetoric on Twitter, the company will still follow laws. 1/n
That means worst-of-the-worst CSAM still comes down, they still follow the new DSA and the TCO, they still remove copyright violations, they (probably) still follow FTO & SDGT lists. 2/n
Lots of bullying, hate speech, and hate group stuff is up in the air. But there's a market incentive to address these issues. 3/n
Important article on Telegram. Extremism researchers should read this. I have a couple of thoughts. The first is that it is a really useful story and I learned a lot. The second is that framing every tech platform story in terms of Facebook is unhelpful.
That second point is important because Telegram has been the central digital home for jihadist terrorists for years, and tech media has paid it very little attention. Even now, in a genuinely useful article, the overarching frame is its relationship to FB.
Third (or fourth, I suppose), Telegram is a prime example of how harms manifest on platforms that aren't driven by Recommendations and Ads. Focusing narrowly on those issues leads toward joyful pile-ons of disfavored platforms but won't solve the underlying issues.
I told myself I wasn’t going to do this, but the whole Spotify/Rogan thing is annoying enough that I’m gonna break down and tweet. A couple of points:
Spotify has a $100 million contract with Rogan and, presumably, signed it because it can monetize his presence pretty dramatically. As a result, it has more editorial responsibility than FB or Twitter have for organic content.
A question, whispered aside: from an ethical perspective, does this mean platforms would be more responsible to moderate content if they paid users for the value of their content?
[Work Thread] A version of Facebook’s Dangerous Organizations and Individuals list was leaked today. I want to provide some context, especially about our legal obligations, and point out some inaccuracies and mischaracterizations in the coverage. 1/n
First, Facebook does not want violence organized or facilitated on its platform and the DOI list is an effort to keep highly risky groups from doing that. It’s not perfect, but that’s why it exists. 2/n
Second, the leaked list is not comprehensive. That matters b/c the list is constantly being updated as teams try to mitigate risk. Like other tech companies, we haven't shared the list, in order to limit legal risk, limit security risks, & minimize opportunities for groups to circumvent rules.
Since President Trump’s comments during last night’s debate, we have seen an uptick on FB in content related to the Proud Boys, including memes featuring his “Stand back and stand by” language. At this point, much of this content condemns the PBs & President Trump’s comments about them.
That said, when this content is shared to support the Proud Boys or other banned individuals, we’ve removed it, and we’ve already hashed the memes to stop other people from continuing to share them.
As a reminder, FB banned the Proud Boys in Oct 2018 so their accounts & other content praising or supporting them is prohibited. Enforcement in this adversarial space isn’t perfect, but the team has blocked hashtags, hashed images, & removed Accounts, Groups, Pages, Events, etc.
I have a brief thread on the tragic shootings in Kenosha, based on findings from FB’s initial internal investigation. 1/n
Yesterday we designated the shooting as a mass murder and removed the shooter’s accounts from Facebook & Instagram. Per standard practice in these situations, we are also removing praise and support of the shooter and have blocked searches of his name on our platforms. 2/n
None of the shooter’s accounts were reported by users prior to the shooting. We have found no evidence that suggests the shooter followed the Kenosha Guard Page or that he was invited on the Event Page they organized. 3/n