I appreciate this discussion bc it helps shine a light on the complexity of these problems. Two things to note as we all work to tackle inauthentic behavior & deception. 🧵
1. There’s a big behavioral difference between spammy amplification and complex IO;
2. Platforms traditionally approach each differently for a reason: each represents different behaviours and has a different incentive structure.
Boosting shares and likes is a numbers game to make content look more popular than it is. It can be used on political content or fake sunglasses (or both).
The best way to tackle a numbers game like this is to counter it at scale via automated detection & removal, no matter the content.
Some accounts will always slip through the net, but billions don’t (FB reports that quarterly). Fakers adapt; the challenge is to keep adapting too.
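To make the "numbers game" point concrete, here's a toy sketch of the kind of engagement-anomaly heuristic that scaled, automated detection could build on: flag accounts whose likes come in tight bursts and land on a handful of posts. Everything here (the LikeEvent structure, the thresholds, the function names) is hypothetical and purely illustrative, not a description of any platform's actual systems.

```python
# Toy illustration only: flags accounts whose like activity is bursty and
# concentrated on very few targets, a pattern typical of purchased engagement.
# All data structures and thresholds here are hypothetical.
from collections import Counter
from dataclasses import dataclass

@dataclass
class LikeEvent:
    account_id: str
    target_post_id: str
    timestamp: float  # seconds since epoch

def burstiness_score(events: list[LikeEvent], window: float = 60.0) -> float:
    """Fraction of likes landing within `window` seconds of the previous like."""
    if len(events) < 2:
        return 0.0
    times = sorted(e.timestamp for e in events)
    bursts = sum(1 for a, b in zip(times, times[1:]) if b - a < window)
    return bursts / (len(times) - 1)

def looks_like_paid_amplification(events: list[LikeEvent],
                                  burst_threshold: float = 0.8,
                                  concentration_threshold: float = 0.5) -> bool:
    """Crude heuristic: bursty likes concentrated on a handful of posts."""
    if not events:
        return False
    targets = Counter(e.target_post_id for e in events)
    top_share = targets.most_common(1)[0][1] / len(events)
    return (burstiness_score(events) > burst_threshold
            and top_share > concentration_threshold)
```

Real systems combine far more signals than this, which is exactly why it stays a cat-and-mouse game.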
Now back to complex IO: AFAIK, it's been years since our research community has seen a successful influence operation or foreign interference campaign built solely on fake likes and shares.
Efforts like that are basically one-trick ponies.
The IO we’ve seen recently have typically combined many different tactics.
Fake likes/shares can be one of the (less effective) ones, but there are so many others, e.g. backstopped cross-platform personas, fake media brands, or co-opting real people: about.fb.com/news/2021/05/i…
As @davidagranovich has already pointed out, it’s really important not to conflate the different kinds of inauthentic activity.
Even within influence ops, it’s important to assess their likely impact based on the available evidence.
🚨 JUST OUT: We took down a troll farm in Nicaragua, run by the Nicaraguan government and the FSLN party.
Our team’s research here: about.fb.com/news/2021/11/o…
Important terminology point: over the years, I’ve seen some confusion over what constitutes a “troll farm”, as opposed to a clickbait or content farm.
Here’s how we understand it.
Two things to note on this operation:
1) This was the closest thing to a “whole-of-government” operation we’ve seen.
2) The troll farm lived across the internet: its own media websites built on WordPress and Blogspot, amplified on FB, IG, TikTok, Twitter, Telegram, YouTube, etc.
JUST OUT: In-depth report on the #Fazze case — a campaign from Russia targeting primarily India and LATAM, and to a lesser extent the US.
It was focused on the Pfizer and AstraZeneca COVID-19 vaccines, but got close to zero traction across the internet. about.fb.com/news/2021/08/j…
Full details in the report, but a couple of thoughts here.
All but one of the networks focused on domestic targets. That’s not unusual: influence operations so often start at home — remember our recent IO Threat Report?
Historically, even some operations that became (in)famous for foreign interference started domestically.
E.g. the early Russian IRA posted critical commentary about Navalny back in 2013, often on LiveJournal (h/t @soshnikoff)
👉Mexico, 1 network linked to local election campaigns, 1 linked to a local politician and a PR firm;
👉Peru, 1 linked to a local party and an advertising firm, 1 linked to a marketing entity;
👉Ukraine, 1 linked to people associated with the Sluha Narodu party,
And...
👉Ukraine, 1 network linked to individuals and entities sanctioned by the US Treasury — Andrii Derkach, Petro Zhuravel, and Begemot-linked media + political consultants associated with Volodymyr Groysman and Oleg Kulinich.
A range of behaviours here. Influence ops take many forms.
👉Fake a/cs posting to multiple pages to make content look popular;
👉In-depth personas to seed geopolitical content;
👉Large numbers of fakes to spam hashtags and geotags;
👉GAN-generated faces, in bulk, but sloppily done.
First, the Thai Military’s Internal Security Operations Command.
About 180 assets, especially active in 2020, posting news and current events alongside pro-military, pro-monarchy, and anti-separatist content.