But to keep it that way, we need to keep getting better, because we know IO actors will evolve.
Just look at how many are using AI-generated profile pics, thinking it’ll help them hide.
Or using cutouts and co-opting authentic voices to amplify their ops.
(On the AI point, one of the ground-breaking moments we had at Graphika was investigating our first op that used large-scale generation of AI profile pics, in late 2019.)
So the next challenge is: how do we all keep ahead, and make the environment less and less hospitable for influence operations?
There are great teams out there working on this. I’m proud of what we’ve achieved with the @DFRLab and @Graphika_NYC teams over the years, and all our friends and colleagues in this field.
They remain crucial voices.
But the people who can make the most difference in tackling IO are the platforms themselves, including by driving collaboration with researchers and journalists.
And when it comes to detecting and exposing IO, the Facebook team’s leading the way.
I’m delighted to join that team and work with some of the world’s top investigators, not just to detect the current threats, but to find ways to get ahead of future ones.
UK telecoms regulator @Ofcom just revoked Chinese state broadcaster CGTN's licence to broadcast in the UK, ruling that the licence is held by an entity that doesn't have editorial control, in breach of UK rules.
And this, just out from @MsHannahMurphy and @SVR13: questions about the hundreds of thousands of followers that the same Huawei Western Europe execs have.
I'll leave it to others to analyse the 800k+ accounts involved in these followings, but one anecdotal sidelight on the fake network of accounts that attacked Belgium: some of its other amplification came from glambots in a network that also boosted Huawei Europe.
Glambots = automated accounts that use profile pictures taken from glamour shoots and similar sources.
One sidelight on the Russian protests today: #Navalny is probably the single most consistent target of Russian disinfo and influence operations.
He's been targeted for at least 8 years by operations including the Internet Research Agency and Secondary Infektion, and by the Kremlin itself.
Way back in September 2013, @Soshnikoff investigated the then newly founded Internet Research Agency, and reported that it had been trolling Navalny when he ran for Mayor of Moscow.
January 2014: the Secondary Infektion op set up its most prolific persona, using a pic of Navalny’s face painted blue. It started out by attacking the Russian opposition.
The username, bloger_nasralny, is a toilet pun on his name.
Question for the #OSINT community: can anyone else find TikTok videos about protests for Navalny that become unavailable if you watch via a Russian server?
If you check TikTok for key hashtags about Navalny and the protests, some of the most popular videos don’t show up when browsing through a Russian VPN.
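For anyone who wants to run this comparison more systematically than clicking through by hand, here's a minimal sketch in Python: it fetches the same video pages directly and through a Russia-based exit point, then flags responses that look like an "unavailable" page. The proxy address, example URL, and "unavailable" markers are all placeholders/assumptions, not TikTok specifics, so verify them manually against a known geoblocked video before trusting the output.

```python
# A minimal sketch, not a definitive method: fetch the same TikTok video URLs
# directly and via a Russia-based exit point, then compare what comes back.
# Assumptions: RU_PROXY is a placeholder for a proxy/VPN endpoint you control
# (SOCKS support needs `pip install requests[socks]`), VIDEO_URLS are example
# links, and the "unavailable" wording is a heuristic to be checked by hand.
import requests

RU_PROXY = {"https": "socks5h://127.0.0.1:1080"}  # placeholder RU exit point
VIDEO_URLS = [
    "https://www.tiktok.com/@example_user/video/0000000000000000000",  # placeholder
]
HEADERS = {"User-Agent": "Mozilla/5.0"}  # plain browser UA

def looks_unavailable(html: str) -> bool:
    # Heuristic only: TikTok's exact wording for blocked/removed videos varies.
    text = html.lower()
    return "video currently unavailable" in text or "couldn't find this video" in text

for url in VIDEO_URLS:
    direct = requests.get(url, headers=HEADERS, timeout=30)
    via_ru = requests.get(url, headers=HEADERS, proxies=RU_PROXY, timeout=30)
    print(url)
    print("  direct:", direct.status_code, "UNAVAILABLE" if looks_unavailable(direct.text) else "ok")
    print("  via RU:", via_ru.status_code, "UNAVAILABLE" if looks_unavailable(via_ru.text) else "ok")
```

If the two runs disagree consistently for the same video, that's the kind of discrepancy worth documenting with timestamps and screenshots before it changes.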