1/ Today, we shared an enforcement under a new protocol designed to extend our network disruptions to new types of adversarial groups. We still have a lot of work to do, but this is a first step into a new space: about.fb.com/news/2021/09/r… 🧵
2/ We built our disruption strategy on the idea that enforcement against tightly coordinated, highly adversarial groups could be more effective if we worked to identify and disrupt the network’s entire presence on our platform. A type of digital deterrence, if you will.
3/ We’ve used this strategy against 150+ CIB networks since 2017, and against sophisticated cyber-espionage actors. And it’s had results: we see some of these networks move off Facebook to less aggressive havens, and their ops have less success on FB. about.fb.com/news/2021/05/i…
4/ When our security team set out to develop approaches to this threat space, we knew there was no single solution that would address every adversarial problem. And of course, network disruptions are not a panacea, but they can be effective against the worst of the worst.
5/ Over the past several months, we’ve been working with our colleagues across Facebook to expand our network disruption efforts to address threats that come from groups of primarily authentic accounts coordinating on our platform to cause serious harm.
6/ Today’s enforcement is under our new Coordinated Social Harm protocol.
7/ These challenges are complex, and we need to be careful and deliberate when tackling coordinated efforts by authentic users, to distinguish between people who organically come together to organize for social change and the types of adversarial networks that can cause social harm.
8/ That means we’re tightly constraining our enforcement to the worst of the worst behavior, to avoid over-enforcing by removing people who don’t meet the pillars of this protocol: adversarial coordination, systemic policy violations, and severe social harm.
9/ Coordinated Social Harm typically involves networks of primarily authentic users who organize to systematically violate our policies to cause harm on or off our platform. This isn’t just about individuals posting the same content about issues they passionately believe in.
10/ This is meant to complement our existing content policies, which already remove violating content and accounts, including for incitement to violence, bullying and harassment, or harmful health misinformation.
11/ However, coming at this from a threat disruption and security perspective, in some cases these content violations are perpetrated by a tightly organized group, working together to amplify their members’ harmful behavior and repeatedly violate our content policies.
12/ That means that the potential for harm caused by the totality of the network's activity far exceeds the impact of each individual post or account.
13/ To address these organized efforts more effectively, the CSH enforcement protocol enables us to take action against the core network engaged in this behavior.
14/ Today’s enforcement was against a network that systematically engaged in violations of our hate speech, incitement to violence, and harmful health misinfo policies, and was associated with the Querdenken movement in Germany.
15/ This Querdenken-linked activity appeared to run across multiple internet services and the broader internet and typically portrayed violence as the way to overturn the pandemic-related government measures limiting personal freedoms.
16/ From public reporting, this group engaged in physical violence against journalists, police, and medical staff in Germany. On our platform, the network repeatedly violated our policies against harmful health misinfo, incitement of violence, bullying, harassment, & hate speech
1/ A few quick thoughts on reporting that equates clickbait farms with foreign troll farms seeking to manipulate public debate ahead of an election. The pages referenced here, based on our own 2019 research, are financially motivated spammers, not covert influence ops. 🧵
2/ Both of these are serious challenges, but they’re different. Conflating them doesn’t help anyone and plays into the hands of IO actors seeking to appear like they’re everywhere. You also can’t stop spammers w/ defenses designed to counter covert IO, and vice versa.
3/ We — and others across industry and research community — have built systems to tackle both of these issues. There is more to do, but there’s been important progress since this internal report.
1/ Today we shared our IO Threat Report, an analytical paper that dives into the 150+ CIB takedowns across 50+ countries that FB’s Threat Intel team discovered over the past 3 years. The report IDs adversary TTPs and trends, and provides recs for tackling IO: about.fb.com/wp-content/upl…
2/ We also released a summary dataset of all of our takedowns since 2017 alongside the report itself. Check that out here (at the end of the report) about.fb.com/wp-content/upl…
3/ We’ve reported on every CIB takedown since the advent of the CIB policy in 2017, but those reports tend to focus on the individual operations’ behavior and attribution. We felt it was important to also provide a strategic look at the ecosystem of IO uncovered from 2017 to 2020.
These are some great suggestions for much-needed reform to the tech pipeline in government. I’d add just a few more from my 6 yrs in civil service -> reform the background check process, find ways to incentivize & compete for talent, abandon outdated performance models
We lose tons of candidates with vital tech and language proficiency in the multi-year wait for clearances. It suppresses diverse talent born overseas, people who have lived abroad, and weakens the federal talent pool. You won’t hire away from tech with 2yr waits for jobs
Gov is also unlikely to ever compete directly w/ private sctr on comp, but makes up for it with mission impact. That said: folks need to eat. Decouple tech from the GS scale or scale comp so new tech talent can pay rent, make car payments, pay down student loans, and save for retirement.
@KembaWalden created and leads The Law of Election Security, a roundtable of cyber and elections lawyers from the private sector and state and federal governments, convened to think creatively about how to improve laws around elections - most recently focused on legislating digital forgeries.
I was an intelligence analyst before I left government. After the intelligence failures that led to the Iraq War, the IC restructured its analytic tradecraft to emphasize standard evidentiary requirements, confidence language, peer review, and alternative analysis 1/
This was especially important because it let the community adapt to new areas of study - without a systematic way to identify bias and groupthink, any analytic community is bound to reach bad conclusions when faced with new data 2/
As the disinfo research space grows, we need to think about ways to build industry-wide analytic standards before our Iraq War moment hits. 3/