Since President Trump’s comments during last night’s debate, we’ve seen an uptick on FB in content related to the Proud Boys, including memes featuring his “Stand down, stand by” language. At this point, much of this content condemns the PBs & President Trump’s comments about them.
That said, when this content is shared to support the Proud Boys or other banned individuals, we remove it, and we have already hashed these memes to stop people from continuing to share them.
As a reminder, FB banned the Proud Boys in Oct 2018, so their accounts & other content praising or supporting them are prohibited. Enforcement in this adversarial space isn’t perfect, but the team has blocked hashtags, hashed images, & removed Accounts, Groups, Pages, Events, etc.
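For those curious what “hashing” means mechanically, here is a toy Python sketch under a simplifying assumption: it uses exact byte-for-byte SHA-256 matching, whereas production systems use perceptual hashes (e.g., Facebook’s open-sourced PDQ algorithm) so that near-duplicate re-uploads also match. The function names are illustrative, not internal APIs.

    import hashlib

    # Hashes of images already removed for violating the Dangerous
    # Organizations policy (illustrative store, not real data).
    banned_hashes = set()

    def hash_image(image_bytes: bytes) -> str:
        # Toy approach: exact-match digest. Real systems use perceptual
        # hashes like PDQ so visually similar re-uploads also match.
        return hashlib.sha256(image_bytes).hexdigest()

    def register_violation(image_bytes: bytes) -> None:
        # Called when reviewers remove a violating meme: its hash is
        # stored so future uploads can be blocked automatically.
        banned_hashes.add(hash_image(image_bytes))

    def should_block_upload(image_bytes: bytes) -> bool:
        # Checked at upload time, before the image is ever published.
        return hash_image(image_bytes) in banned_hashes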
I’m very aware that researchers & journalists still find these folks on platform, but I also see how much is prevented from upload or removed before it’s identified publicly. This work is always ongoing. Check out the Community Standards Enforcement Report (CSER) for more on this work generally: transparency.facebook.com/community-stan…
In 2020, we’ve also conducted 3 strategic network disruptions (SNDs) against the Proud Boys. In these operations, our internal investigators identify a broader network of accounts and the content the network is using, then remove it all at once (a rough sketch of the network-mapping idea follows the three examples below).
The first SND was in February, when our investigation into Proud Boys accounts led us to also identify a network tied to Roger Stone violating our policies on coordinated inauthentic behavior (CIB) - about.fb.com/news/2020/07/r…
The second SND came in two steps, in May and June, when we identified Proud Boys accounts returning to FB and coordinating to attend and bring weapons to protests in the US.
The third SND was last week and targeted accounts returning to FB to promote and coordinate the rally in Portland last weekend.
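For the network-mapping idea referenced above, here is a rough Python sketch. The get_linked_accounts function is a hypothetical stand-in for investigative signals (shared admins, shared infrastructure, cross-posting patterns); it is not Facebook’s actual tooling, just a way to show the “map the network, then remove it all at once” shape of the operation.

    from collections import deque

    def map_network(seed_accounts, get_linked_accounts):
        # Breadth-first expansion from a few seed accounts to the wider
        # network they operate with. get_linked_accounts is a hypothetical
        # signal source (shared admins, infrastructure, cross-posting).
        seen = set(seed_accounts)
        queue = deque(seed_accounts)
        while queue:
            account = queue.popleft()
            for linked in get_linked_accounts(account):
                if linked not in seen:
                    seen.add(linked)
                    queue.append(linked)
        return seen

    # The mapped network is then removed in a single action rather than
    # account-by-account, so operators can't adapt piecemeal.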
Their tactics keep changing. The latest: when we remove content promoting their rallies, we see them replace “Proud Boys” with “Trump 2020” and “MAGA” in their content. When we determine this material is designed to support the Proud Boys, we remove it.
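A toy illustration of why that tactic works against naive keyword filters (illustrative Python only; the blocked-term list and function are hypothetical, and real enforcement relies on behavioral signals rather than string matching):

    BLOCKED_TERMS = {"proud boys"}

    def keyword_flag(text: str) -> bool:
        # Naive filter: flags only when a blocked term appears verbatim.
        return any(term in text.lower() for term in BLOCKED_TERMS)

    keyword_flag("Proud Boys rally this Saturday")  # True: caught
    keyword_flag("Trump 2020 rally this Saturday")  # False: same event,
    # renamed to evade the filter, which is why enforcement has to look
    # at who is posting the content and how it is being used.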
I know folks are also curious about the new Militarized Social Movements policy announced on Aug 19. Bottom line: we’ve identified more than 300 groups under this policy and removed more than 6,500 Groups/Pages between 8/19 and 9/15.
Does this mean the problem is solved? No. There are still gaps, and bad actors are always adapting their techniques. We find many of these gaps, but researchers and journalists sometimes identify them first. I know you won’t be shy about pointing them out.
Finally, a key point. As a global society we ultimately had some success countering the Islamic State’s digital activity because there was a coalition of 80+ countries working the problem. Digital platforms stepped up, but the success online was a function of broader trends.
Social media companies must improve policies & enforcement re white supremacy & violence. But this problem demands a societal, governmental, and inter-governmental response. Companies should do more, but even perfection from digital platforms won’t solve this problem alone.
There is no one solution that fixes everything. ISIS has taken a beating but is still out there.
It takes a society collectively confronting hate and violence directly to push it back: creating new offramps for those tempted by it, mitigating the social ills that create disillusionment, investigating and punishing crimes when they occur.
Facebook has work to do to keep the platform safe, and get our piece of that wider puzzle right. That work continues and deserves attention; but we need to work together to uplevel the rest of the strategy as well. /End
Correction. A tweet earlier in the thread misquoted President Trump as saying re the Proud Boys, “Stand down, stand by.” He said, “Stand back and stand by.”
Okay, a couple of thoughts on Twitter. Presumably, Elon & Morgan Stanley want to make money from this whole thing, which is harder if you’re getting fined all the time. So I’m assuming that, despite Elon’s rhetoric on Twitter, the company will still follow the law. 1/n
That means worst-of-the-worst CSAM (child sexual abuse material) still comes down, they still follow the EU’s new DSA and TCO regulations, they still remove copyright violations, and they (probably) still follow the US FTO & SDGT designation lists. 2/n
Lots of bullying, hate speech, and hate group stuff is up in the air. But there's a market incentive to address these issues. 3/n
Important article on Telegram. Extremism researchers should read this. I have a couple of thoughts. The first is that it is a really useful story and I learned a lot. The second is that framing every tech platform story in terms of Facebook is unhelpful.
That second point is important because Telegram has been the central digital home for jihadist terrorists for years, and tech media has paid it very little attention. Even now, in a genuinely useful article, the overarching frame is its relationship to FB.
Third (or fourth, I suppose), Telegram is a prime example of how harms manifest on platforms that aren't driven by Recommendations and Ads. Focusing narrowly on those issues leads toward joyful pile-ons of disfavored platforms but won't solve the underlying issues.
I told myself I wasn’t going to do this, but the whole Spotify/Rogan thing is annoying enough that I’m gonna break down and tweet. A couple of points:
Spotify has a $100 million contract with Rogan and, presumably, signed it because they can monetize his presence pretty dramatically. As a result, they have more editorial responsibility than FB or Twitter have for organic content.
A question, whispered as an aside: from an ethical perspective, does this mean platforms would bear more responsibility to moderate content if they paid users for the value of their content?
[Work Thread] A version of Facebook’s Dangerous Organizations and Individuals list was leaked today. I want to provide some context, especially about our legal obligations, and point out some inaccuracies and mischaracterizations in the coverage. 1/n
First, Facebook does not want violence organized or facilitated on its platform and the DOI list is an effort to keep highly risky groups from doing that. It’s not perfect, but that’s why it exists. 2/n
Second, the leaked list is not comprehensive. That matters b/c the list is constantly being updated as teams try to mitigate risk. Like other tech companies, we haven’t shared the list, in order to limit legal risk, limit security risks, & minimize opportunities for groups to circumvent the rules.
[Work Thread Coming Up!] Today, Facebook has updated its public information about the Dangerous Individuals and Organizations policy, in line with recommendations from the Oversight Board made in early 2021. facebook.com/communitystand…
The policies aren’t changing, but we are providing more detail about them—in ways that have been a long time coming. I’m really glad we are taking this step. 2/
As a reminder, Facebook assesses DOI entities based on their behavior both online and offline. Most significant is an entity’s engagement with violence. Under various elements of the policy (some relatively new), we designate individuals, organizations, and networks of people. 3/
I have a brief thread on the tragic shootings in Kenosha, based on findings from FB’s initial internal investigation. 1/n
Yesterday we designated the shooting as a mass murder and removed the shooter’s accounts from Facebook & Instagram. Per standard practice in these situations, we are also removing praise and support of the shooter and have blocked searches of his name on our platforms. 2/n
None of the shooter’s accounts were reported by users prior to the shooting. We have found no evidence suggesting the shooter followed the Kenosha Guard Page or that he was invited to the Event Page they organized. 3/n