1/ Today we’re announcing 5 networks removed for Coordinated Inauthentic Behavior in February: 2 networks from Iran targeting multiple countries, and domestic networks in Thailand, Morocco, and Russia about.fb.com/news/2021/03/f…
2/ The two Iranian networks focused on the Middle East, as well as the UK and Afghanistan. They engaged on a range of topics, using tactics we’ve seen from other operations, and had limited reach — one operation had fewer than 15K followers across its assets, the other under 500.
3/ The Thai operation exhibited links to the Thai Military’s Internal Security Operations Command. It used fake accounts posing as individuals from the southern provinces of Thailand to criticize separatist movements and support the monarchy and military.
4/ The Russian operation used a network of bulk-created fake accounts to attempt to “poison” hashtags and geotags used by pro-Navalny protestors. The fake accounts were created as recently as January 2021, and were detected and disabled by our automation.
5/ The Russian operation is interesting b/c of its focus on pro-Navalny protests, and because of its use of bulk-created fake accounts (a tactic we see most commonly from financially-motivated fraudsters).
6/ But it wasn’t very effective. Their accounts were detected by our systems and blocked as they increased their activity, so while there were ~500 fake accounts associated w/the op, only a limited number were active at any time, and they were stopped before they could do much.
7/ It’s a good reminder that automated systems & expert investigative teams work best together. We’ll keep investing in both these efforts to tackle online deception, and making sure they continue to complement each other.
1/ This is excellent analysis from @2020Partnership on misinfo during the 2020 election. Having a team of independent researchers focused on election protection and online deception is a *huge* boon for the defender community. atlanticcouncil.org/in-depth-resea…
2/ We saw many of the trends that EIP called out in this report, including cross-platform spread -- narratives often originate with a few accounts, spread across multiple platforms as they gain popularity, and are even further amplified through traditional media coverage.
3/ I particularly appreciate their calls for clarity and consistency in rules from platforms, government and legislators, and their emphasis on the importance of getting proactive accurate information out ahead of deceptive narratives.
An excellent read from @lawfareblog. @C_C_Krebs did a number of important things, but this one was both subtle & critical: “Yet Krebs, along with a handful of others ... retained their reputations for telling the truth on foreign threats to the integrity of American elections.”
In this age of perception hacks and IO, perception of security *is* security. And no one will believe a system is secure without a trusted source of truth. Empowering voices to serve that role will be very hard in today’s low-trust reality, but that makes it even more important.
We need a trusted, apolitical, insulated voice that can speak with authority to the American people, giving them an accurate assessment of risk and security, and armoring them against perception hacks, whether foreign or domestic.
1/ Today we published our first Inauthentic Behavior (IB) report. This report details how we tackle various forms of IB and offers some examples of recent enforcements to illustrate notable trends and tactics we’ve seen. about.fb.com/news/2020/10/i…
2/ For 3+ yrs we’ve publicly reported our removals of CIB networks. These are like the APTs of #IO. But deceptive tactics are not limited to CIB — spammers and scammers often rely on similar behaviors. We tackle both threats, but we tackle them differently.
3/ CIB actors tend to be unrepentant deceivers — if you’re running a network of fake accounts, you know you’re being misleading. IB violators want to push the boundaries, but may not intend to break the rules.
1/ Today we announced 10 CIB takedowns, including 6 networks we removed during the month of September, and 4 that we removed as recently as this morning. We had already announced most of the Sept networks. about.fb.com/news/2020/10/r…
2/ More than half of these 10 networks targeted domestic audiences in their own countries, and many were tied to politically affiliated actors in each country — the US, Myanmar, Russia, Nigeria, the Philippines and Azerbaijan.
3/ Half of the takedowns in this report began with our own internal investigations; the other half were based on information published or shared by external groups, including the FBI and investigative reporters.
1/ Today we announced three CIB takedowns linked to Russian actors — all three had very limited global following, and even more minimal following in the US. But we know that networks like these can pivot in the weeks to come, so we’ll stay vigilant. about.fb.com/news/2020/09/r…
2/ These networks centered primarily around off-platform websites designed to look like independent or fictitious media organizations and attempted to engage unwitting people to write for them. This is similar to a Russian network we removed in August. about.fb.com/news/2020/09/a…
3/ This is a good reminder that threat actors — including from Russia — will continue to try to manipulate public debate globally and in the US, including by trying to trick journalists into doing their amplification for them.
1/ There’s been an important debate today about an online campaign to inflate ticket sales at the Tulsa rally, and whether this constitutes deceptive behavior (cc @persily @evelyndouek). Based on public reporting, this isn’t CIB as we define it. #thread nytimes.com/2020/06/21/sty…
2/ First off, it’s critical to analyze this based on the behavior, not the content. However one might feel about the intent here, what was the behavior this campaign engaged in, and is that harmfully deceptive or simply coordinated?
3/ Second, I’m going to address this from a platform perspective. For FB, the key question would be: did the people behind it engage in on-platform behavior that systemically deceived users?