[Work Thread] A version of Facebook’s Dangerous Organizations and Individuals list was leaked today. I want to provide some context, especially about our legal obligations, and point out some inaccuracies and mischaracterizations in the coverage. 1/n
First, Facebook does not want violence organized or facilitated on its platform and the DOI list is an effort to keep highly risky groups from doing that. It’s not perfect, but that’s why it exists. 2/n
Second, the leaked list is not comprehensive. That matters because the list is constantly being updated as teams try to mitigate risk. Like other tech companies, we haven’t shared the list in order to limit legal risk, limit security risks, & minimize opportunities for groups to circumvent the rules. 3/n
Third, the DOI policy is unusual for two reasons: actors are subject to it based on behavior off of Facebook, not just what they post on-platform, and in some cases we have a legal obligation to remove these actors. 4/n
That latter issue is really important and poorly understood. For example, FB has a legal obligation to comply with U.S. law governing entities designated as foreign terrorist organizations, global terrorists, and other sanctioned parties. 5/n
In our Community Standards, we’ve made it clear where these legal requirements overlap with our DOI policy. 6/n
Defining & identifying Dangerous Orgs globally is extremely difficult. There are no hard-and-fast definitions agreed upon by everyone. Take terrorism: government lists reflect each government’s political outlook and policy goals, and even agencies within the same government define the problem differently. 7/n
Likewise, the UN hasn’t settled on a universal definition of terrorism. The same problem applies across all the other DOI sub-categories. That’s why we make our definitions public in the Community Standards and it’s why we go beyond (and don’t simply follow) government lists. 8/n
Many of the groups listed in the document are subsidiaries or media wings of larger entities. This is particularly true with well-established terrorist groups like ISIS and al-Qaeda, for which we have documented hundreds of individual entities. 9/n
That matters because in a superficial analysis this structure dramatically skews the overall number of entities from a particular region. 10/n
It’s important that FB document each ISIS wilayat (provincial affiliate) to facilitate enforcement, but counting each one separately to support the argument that the overall list is biased is misleading. 11/n
That kind of analysis generates heat, not light, and paints an inaccurate, misleading picture of the groups on our Tier 1 list. 12/n
In general, this sort of superficial analysis does not reflect the reality of our legal obligations or the fact that hundreds of the listed entities are derivative of our designations of uncontroversial orgs like ISIS and al-Qaeda. 13/n
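To make that counting point concrete, here is a minimal, hypothetical sketch (invented entity names, regions, and figures; not Facebook’s actual data or tooling) of how tallying every derivative entity separately inflates one region’s apparent share of the list:

```python
from collections import Counter

# Hypothetical designation records. Each derivative entity (media wing,
# provincial affiliate, etc.) is documented separately for enforcement
# purposes, but rolls up to a single parent organization.
entities = [
    {"name": "ISIS - Wilayat A",  "parent": "ISIS",         "region": "Middle East"},
    {"name": "ISIS - Wilayat B",  "parent": "ISIS",         "region": "Middle East"},
    {"name": "ISIS - Media Wing", "parent": "ISIS",         "region": "Middle East"},
    {"name": "Hate Group X",      "parent": "Hate Group X", "region": "North America"},
]

# Naive tally: every derivative entity counted on its own.
raw_counts = Counter(e["region"] for e in entities)

# Deduplicated tally: one entry per parent organization.
unique_parents = {(e["parent"], e["region"]) for e in entities}
parent_counts = Counter(region for _parent, region in unique_parents)

print(raw_counts)     # Counter({'Middle East': 3, 'North America': 1})
print(parent_counts)  # one organization per region in this toy example
```

In the naive tally, a single well-documented org accounts for three ‘entities’ in its region; grouped by parent, each region holds one org. That is the skew described above.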
The data analysis matters because The Intercept’s story uses those misleading figures to suggest that FB doesn’t take white supremacist groups seriously. That is wrong. 14/n
More than 250 white supremacist entities are designated as hate groups under our most restrictive policy, Tier 1. That means no ‘praise, support and representation’. 15/n
In 2020 we expanded our DOI policy to include ‘Militarized Social Movements’, including militias. As far as we know, this is also the most comprehensive list of its kind and reflects the legitimate concern we had regarding potential political violence in 2020. 16/n
But if a would-be ‘MSM’ meets the bar for designation under our most restrictive policies, it is designated accordingly. For example, the Proud Boys were listed as a hate org in 2018. 17/n
I don’t want to suggest that FB’s Dangerous Orgs list, or its enforcement, is perfect. It isn’t. We don’t get to every organization as quickly as we’d like, and the policy, as we’ve long stated, is blunt. Enforcement is not perfect, in part because these groups are adversarial. 18/n
Sometimes discussion of content moderation gets framed as if there are permanent, perfect solutions. That’s not right. Every day it is a matter of adversarial adaptation. We can’t get rid of these groups completely, but we can make it harder for them to operate. 19/n
Finally, I want to reflect on something here that I wrote to the internal teams yesterday. 20/n
I don’t condone this leak. In the aggregate it makes everything harder. There will be criticism. But we’ll use it as an opportunity to get better. 21/n
That means hard questions internally and more focused discussions with stakeholders, who often have valuable points of view. For example, the nuance we’ve added to our “Violent Non-State Actors” policy is a result of those past discussions. 22/n
In this process, we’re going to learn about gaps in designation & enforcement, & places where the policy could, & perhaps should, have more nuance. This is positive & we will use any learnings to improve FB as a platform & support more productive communities online and off. /end

