🚨 JUST OUT 🚨
Quarterly threat report from @Meta’s investigative teams.
Much to dig into:
State & non-state actors targeting Ukraine;
Cyber espionage from Iran and Azerbaijan;
Influence ops in Brazil and Costa Rica;
Spammy activity in the Philippines... about.fb.com/news/2022/04/m…
We also took down an attempt by people linked to past Internet Research Agency activity.
They posed as an NGO and were active almost entirely off platform: they did try to create FB accounts in January, but were detected.
Throughout January and February, their website focused on accusing the West of human rights abuse. After the invasion, they focused more on Ukraine.
Ghostwriter is an op that specialises in hacking people’s emails, then trying to take over their social media from there.
We saw a spike in Ghostwriter targeting of the Ukrainian military after the invasion.
In a few cases, Ghostwriter posted videos calling on the Army to surrender as if these posts were coming from the legitimate account owners.
We blocked these videos from being shared.
Non-state actors focused on Ukraine too. We took down an attempted comeback by an actor running two websites posing as news outlets. Those sites were linked to a network we disrupted in December 2020.
As is typical for critical world events, spammers focused on the invasion too. They used war footage to capitalise on people’s attention to the war and make money.
Our automated systems and manual reviews have caught thousands of these.
Finally, we also took down a group in Russia that tried to mass-report people in Ukraine and Russia to silence them. Likely in an attempt to conceal their activity, they coordinated in a group that was ostensibly focused on cooking.
It’s worth making a bigger point here: we as a society have come a long way since 2014. The info ops research community is far more developed than it was back then, when there were just a handful of us doing this research.
On platform, we caught the recidivist ops early.
In the #OSINT community, there's more forensic expertise than ever.
🚨 TAKEDOWN 🚨
This weekend, we took down a relatively small influence operation that had targeted Ukraine across multiple social media platforms and websites. It was run by people in Russia and Ukraine: about.fb.com/news/2022/02/s…
It consisted of approx 40 accounts, Groups and Pages on FB and IG, plus accounts on Twitter, YouTube, VK, OK, and Telegram.
It mainly posted links to long-form articles on its websites, without much luck making them engaging. It got very few reactions and under 4k followers.
It ran a few fake personas posing as authors. They had fake profile pics (likely GAN-generated) and unusually detailed public bios, e.g. former civil aviation engineer, hydrography expert.
The op posted its articles on its websites and social media, and amplified them using more fake accounts.
Personal 🧵 based on years of OSINT research into influence operations since 2014.
Looking at the Russian official messaging on “de-nazification” and “genocide”, it’s worth putting them in the context of the many different Russian IO that have targeted Ukraine over the years.
* Iran, targeting the UK, focusing on Scottish independence;
* Mexico, a PR firm targeting audiences across LATAM;
* Turkey, targeting Libya, and linked to the Libyan Justice and Construction Party (affiliated w/Muslim Brotherhood).
This isn’t the first time an Iranian op has posed as supporters of Scottish independence.
In the past, FB found a page that copied and posted political cartoons about independence as far back as 2013. @Graphika_NYC writeup here (pages 26-27) graphika.com/reports/irans-…
* Expanding the CrowdTangle IO archive to more researchers
* First public takedowns of brigading & mass reporting networks
* CIB takedown from Palestine (Hamas)
* Two CIB ops focused on Poland / Belarus migrant crisis (one from Belarus KGB)
* Op Swiss Rôle
First, a deep dive: in July, a fake “Swiss biologist” persona on FB and Twitter accused the US of bullying the WHO over COVID origins, and was picked up by Chinese state media with amazing speed.
I appreciate this discussion because it helps shine a light on the complexity of these problems. Two things to note as we all work to tackle inauthentic behavior & deception. 🧵
1. There’s a big behavioral difference between spammy amplification and complex IO;
2. Platforms traditionally approach each differently for a reason: each represents different behaviors and has a different incentive structure.
Boosting shares and likes is a numbers game to make content look more popular than it is. It can be used on political content or fake sunglasses (or both).
🚨 JUST OUT: We took down a troll farm in Nicaragua, run by the Nicaraguan government and the FSLN party.
Our team’s research here: about.fb.com/news/2021/11/o…
Important terminology point: over the years, I’ve seen some confusion over what constitutes a “troll farm”, as opposed to a clickbait or content farm.
Here’s how we understand it.
Two things to note on this operation:
1) This was the closest thing to a “whole-of-government” operation we’ve seen.
2) The troll farm lived across the internet: its own media websites built on WordPress and Blogspot, amplified on FB, IG, TikTok, Twitter, Telegram, YouTube, etc.