Carl Miller
Mar 18, 2022 · 27 tweets
When we say Kyiv is winning the information war, far too often we only mean information spaces we inhabit.

Pulling apart the most obvious Russian info op to date (as we did using semantic modelling), it's very clear it is targeting BRICS, Africa and Asia. Not really the West at all.
This is the kind of thing this network shares, by the way. Mainly an amplification network pumping a small number of viral pro-invasion memes, largely around themes of Western hypocrisy, NATO expansionism and BRICS solidarity.
These were the accounts receiving the most inward amplification: the higher-value accounts actually sending the virals. You'll see they, too, are spread across Asian and African languages and identities.
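For anyone curious about the rough shape of this step: "inward amplification" is just weighted in-degree on the retweet graph. A minimal Python sketch with toy records (the field names and data are illustrative, not our actual pipeline):

```python
from collections import Counter

# Each record is (retweeter, original_author) for one observed retweet.
retweets = [
    ("acct_a", "viral_1"), ("acct_b", "viral_1"),
    ("acct_c", "viral_1"), ("acct_a", "viral_2"),
]

# Weighted in-degree: how many retweets each account receives.
inward = Counter(author for _, author in retweets)

# The accounts receiving the most inward amplification.
top = inward.most_common(2)
print(top)  # [('viral_1', 3), ('viral_2', 1)]
```

The real graph has hundreds of thousands of edges, but the ranking logic is the same.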
A slightly different rendering that should let you zoom in to see account locations better. Very clear linguistic concentrations, especially among the pro-BJP/Hindi accounts and in South Africa, connected by longer, looser tunnels of Asian/Indian accounts that tend to mix in English.
Some of these clusters, like dark green, contain accounts which are quite old. The blue cluster is different from all the others: almost no account creation for years, then BANG, almost all of its accounts created very recently.
Clear across all of them is this: they all snapped into action across March 2nd-3rd. That is when the pro-invasion memes all trended.

There are differences in how much non-Russian messaging each cluster shares. 'Keywords' here are ones like 'Putin' and 'Ukraine'.
For anyone who wants to dive deeper into this network, I'll be sharing more info about it today.

Let's start where we left off yesterday. Really, the point of this network was to concentrate retweets onto a small number of pro-invasion memes/virals, marked on the map here.
The clusters are distinct from each other in ways beyond language. The red, bright green and beige clusters send WAY more retweets than the others: their average retweet:tweet ratio is about 4:1, whereas orange's is 0.5.

The accounts in the middle of red/beige/green only send retweets. Probably RT farms.
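The retweet:tweet ratio is a simple but telling statistic. A toy sketch of how you'd compute it and surface retweet-only "RT farm" candidates (thresholds and field names are illustrative, not from our dataset):

```python
# Toy per-account activity counts.
accounts = [
    {"id": "a1", "retweets": 400, "tweets": 100},  # ratio 4.0
    {"id": "a2", "retweets": 50,  "tweets": 100},  # ratio 0.5
    {"id": "a3", "retweets": 900, "tweets": 0},    # retweet-only
]

def rt_ratio(acc):
    # Retweet-only accounts get infinity rather than a division error.
    if acc["tweets"] == 0:
        return float("inf")
    return acc["retweets"] / acc["tweets"]

# Accounts that ONLY retweet are the "RT farm" candidates.
rt_farms = [a["id"] for a in accounts if a["tweets"] == 0]
print(rt_farms)  # ['a3']
```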
Let's jump into the clusters. Disclaimer: I'm going to use real accounts as examples here. I'm not suggesting they're certainly bots or Russian info agents, just that behaviourally they are very similar to a network that, in its entirety, is extremely suspicious.
The reds look like this. A tight, dense pocket of pro-BJP, Hindi-language accounts.

566 in total, sending 4M messages. The spammiest cluster, in my reading: highest retweet:tweet ratio of any. They weren't bothered at all by Russia until March 2nd.
The blues are different from any other cluster. English-language and very little in the way of clear locality or regional focus. The youngest accounts, 5M messages and almost all of it, from what I can see, pro-invasion messaging. Almost no followers.

A separate operation imo
Orange is a tunnel of accounts that loosely connects the Hindi cluster to the SA cluster. 736 accounts, mainly using Urdu, Javanese, Nepali and Malay. Each one averages 3.4k messages and only 23 followers. Again, 'activates' in volume over March 2nd (anyone spotting the pattern?).
The yellow cluster is really interesting. Clearly South African, with lots of pro-Zuma, BRICS-solidarity messaging. Highest number of original messages and average followers; many of these 1,010 accounts are real imo.

This is where the artificial campaign got the most organic take-up imo
There are some accounts in each of these networks that have been around for years, but each cluster has a similar profile: a lot of accounts were created very recently.

Check out blue, especially
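Spotting these creation bursts is straightforward once you bucket accounts by creation month. A toy sketch (dates and the 50% threshold are illustrative; a real run would use the API's created_at field):

```python
from collections import Counter
from datetime import date

# Toy creation dates for one cluster.
created = [
    date(2015, 6, 1), date(2018, 3, 9),
    date(2022, 2, 24), date(2022, 2, 24), date(2022, 2, 25),
]

# Bucket by month and flag months holding an outsized share of the cluster.
by_month = Counter((d.year, d.month) for d in created)
total = len(created)
bursts = {m: n for m, n in by_month.items() if n / total >= 0.5}
print(bursts)  # {(2022, 2): 3}
```

Organic communities accrete members over years; a single month holding most of a cluster is the "BANG" pattern above.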
The DARK GREENS are a linguistic cluster entirely unto themselves. 474 accounts using Urdu, Sindhi and Farsi: lots of pro-Imran Khan/PTI messaging.

By far the highest (mean average) follower count in this analysis: 3.5k. There are some very big, very visible accounts here.
In many ways, I've found VIOLET the hardest to characterise.

- The biggest cluster (1,441 accounts)
- Easily the most messages (12M)
- Most messages per user (8.6k)
- 2nd-highest followers per user (220)
- Oldest accounts (avg. creation 2017)

Currently concerned with Nigerian fuel shortages.
This just leaves the BRIGHT GREENS. It looks like a tunnel cluster but isn't really; the majority of its accounts are bunched up next to the Hindi-language pinks.

These are Indian accounts too, but they tend to use more English. 1,314 in total, sending 6.8M messages. Avg. only 18 followers.
The point of doing this network mapping wasn't just to describe this particular campaign, but also discovery. We're now swinging towards the less researchable and probably more harmful activity across all the other social media platforms we can reach.
There's loads of us over at CASM working on social media research methods. Special citation here to Chris Inskip, who did the semantic mapping using DistilRoberta, trained as a sentence-similarity model. That's the topic of his PhD, which will be sensational when it's done.
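The semantic mapping itself is more than a tweet can hold, but the core operation under sentence-similarity models is cosine similarity between embeddings: texts whose vectors point the same way get grouped together. A toy sketch, with hand-made 3-d vectors standing in for real DistilRoberta sentence embeddings (which have hundreds of dimensions):

```python
import math

def cosine(u, v):
    # Cosine similarity: dot product divided by the product of magnitudes.
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# Hand-made stand-ins for real sentence embeddings.
emb = {
    "meme_a":    [0.9, 0.1, 0.0],
    "meme_b":    [0.8, 0.2, 0.1],
    "unrelated": [0.0, 0.1, 0.9],
}

print(round(cosine(emb["meme_a"], emb["meme_b"]), 2))     # high: same theme
print(round(cosine(emb["meme_a"], emb["unrelated"]), 2))  # low: different theme
```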
People have asked for some more examples of the messaging. Whilst I don't like to amplify, it is important to show the rhetorical positioning being used here.
So there's a lot of media attention on this work, which is great. But I'd like to clarify two things:

(1) Does the data definitively point towards the Russian state? No, that's not what data science can do. Twitter has taken down some of the accounts for 'coordinated inauthentic behaviour', but exactly who is behind them becomes a judgement. Contextually, and in terms of the techniques likely used, my judgement is that it is a pro-Russian, pro-invasion operation; but that's my impression as a researcher who spends their time pulling apart these things.
(2) Bots... are they all bots? We became interested in this exactly because of all the amazing research that pointed to inauthenticity:

medium.com/dfrlab/istandw…

isdglobal.org/digital_dispat…
Our work also uncovered some patterns you very rarely see with organic activity.

- A lot of accounts created on the day of the invasion
- High engagement from accounts with no followers
- Substantial overlaps in sharing activity
- Very high retweet:tweet ratios
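The checks above are heuristics, not proofs. A toy sketch of how they might be encoded (the thresholds and field names are illustrative assumptions, not the values we actually used):

```python
from datetime import date

INVASION_DAY = date(2022, 2, 24)

def suspicion_flags(acc):
    # Heuristic red flags only: none of these proves an account is a bot.
    flags = []
    if acc["created"] == INVASION_DAY:
        flags.append("created_on_invasion_day")
    if acc["followers"] == 0 and acc["engagements"] > 100:
        flags.append("engagement_without_followers")
    if acc["tweets"] and acc["retweets"] / acc["tweets"] > 4:
        flags.append("high_rt_ratio")
    return flags

acc = {"created": date(2022, 2, 24), "followers": 0,
       "engagements": 500, "retweets": 900, "tweets": 100}
print(suspicion_flags(acc))
```

An account tripping several flags at once is what moves it from "odd" to "very rarely seen organically".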
But again, there was a lot of research on this. And none of it can ever definitively say what's a bot and what isn't; it just notices suspicious patterns.

IMO there's some automation here. But also compromised accounts, human ones, and some which flip back and forth.
Our research wasn't primarily aimed at that question either. It was, on the basis of the research already looking at all of that, and on the basis of the takedowns that had happened, interested in the nature of these accounts and what they might say about strategy and targets.
Hey there folks o/

If anyone is interested, here's a White Paper explaining a lot more of the methods and findings (not to mention nuances and caveats) underlying our research here.

casmtechnology.com/case-studies/d…


