Carl Miller
Author - https://t.co/Q077M92RKm Co-founder - https://t.co/1GFiwVvAYT Co-founder - https://t.co/hAOpKyhiBn Me - https://t.co/YIibyjZ3vQ
Aug 21, 2023 6 tweets 2 min read
*Thrilled* to announce that my colleagues and I have been awarded one of the grants by OpenAI to try to democratise the development of artificial intelligence. The idea is called The Recursive Public; a quick thread on it below, and an invitation to be part of it.

The idea is to use a consensus-seeking process pioneered in Taiwan to host an online deliberation. The mission is simple: to set the agenda. What are the things we really need to think about in order to control AI?

recursivepublic.com
Feb 9, 2023 4 tweets 1 min read
1/The EU is now talking about 'foreign information manipulation and interference'. Despite the fact it creates the acronym FIMI, this is a huge step forwards. I'll briefly give my view on why.

First, disinformation is a horrible way of understanding the problem. There are tons of very manipulative things that don't rely on lying. There's also lots of lying not done by states. When you set the problem up as disinfo, there's really not very much you can do.

FIMI instead focuses on specific bad actors using a tradecraft of...
Nov 14, 2022 12 tweets 4 min read
Good morning everyone!

Today we're launching, well, not really a new counter-disinfo capability, because we've been building and using it for five years. But a much greater effort to talk about how it works, and what we've learned doing this strange new kind of research.

🧵 First things first: it's called Beam.
It has its own website: beamdisinfo.org
And its own white paper: beamdisinfo.org/wp-content/upl…
It's the shared result of an extremely long-standing collaboration between us at CASM Tech and @ISDglobal to (sometimes frenetically) defend info spaces.
Oct 27, 2022 6 tweets 1 min read
1/ 'Disinformation' isn't primarily about being lured in by falsehoods. It's about entirely parallel epistemic worlds that have been built, variously rejecting mainstream journalism, the scientific method, academia, politics. Just like ours, these worlds have investigators, controversies...

2/ ...intellectual trends and fads. These can be conspiracists, extremist political mobilisations, anti-vax, QAnon - there's a tangle and they often link up.

And just like ours, these worlds deeply change the people who live in them. It often severs
Oct 21, 2022 8 tweets 2 min read
1/ Counter-disinformation is now blooming into a full industry - it reminds me of the early days of 'preventing violent extremism' in the UK, when I first started my career.

Just like PVE in the early days, however, it's extremely unclear what actually works. A few thoughts:

2/ The problem begins (maybe ends?) with the use of machine learning in social media research. Which - especially by the new start-ups - is waved around in a vague way to justify some sort of proprietary algorithm they've built but can't reveal because it's their key IP.
Oct 17, 2022 14 tweets 4 min read
New research from us is out today! It's on information warfare on Wikipedia about the invasion of Ukraine

This is our first foray from CASM/@ISDglobal to try to find on Wikipedia what we track across social media: coordinated, covert attempts to manipulate our info ecologies.

The subject: suspicious editing behaviour on the English-language Wikipedia page for the Russo-Ukrainian War.

It's freely available here: files.casmtechnology.com/information-wa…

A brief discussion below where I'll go through the paper's main points.
May 18, 2022 5 tweets 1 min read
VCs are pushing a whole generation of new counter-disinfo start-ups into a familiar mould:

some magical proprietary artificial intelligence algorithm that can identify illicit activity as a service.

VCs love this business model - it can scale super fast. It just doesn't work.

I've spent ten years now digging around in social media research and I'm lucky enough to have some unbelievably capable NLP colleagues. If there's one thing I've learned, it's that there's no universal, fully automated way of catching IO or disinfo.
May 18, 2022 4 tweets 2 min read
This is cool - the @TheEconomist followed up on our research looking at accounts pumping pro-invasion hashtags in March. Just came out.

They went further with their own analysis. Two things struck me most.

(1) Possible impact. 🧵

economist.com/graphic-detail…

"The suspicious accounts succeeded in injecting these views into online conversation.... they seem to be winning converts. After suspicious accounts posted pro-Russian content, the share of their followers' tweets favouring Russia also tended to rise."
May 10, 2022 5 tweets 2 min read
The BBC Disinformation Team (@julianagragnani, @MedhaviArora, Seraj Ali) has just released a deep dive on the 9.9k accounts our research looked at back in March that had shared pro-invasion hashtags five or more times.

What they've found is fascinating🧵

bbc.co.uk/news/blogs-tre…

If you remember, we found distinct language/national clusters that neither purported to be Western accounts, nor addressed Western audiences. Some of the clusters (red, beige, dark green, blue) had especially spam-like characteristics. casmtechnology.com/case-studies/d…
Apr 6, 2022 4 tweets 1 min read
1/ Four quick predictions for where influence operations are going, that I've just made at the @G7 #FactsvsDisinformation conference.

(1) Strategic evolution. We've become used to campaigns targeting Western audiences, using wedge issues to hasten social division. Especially in relation to Ukraine, we're seeing BRICS countries being addressed, and very contested information environments in India, South Africa, Indonesia and Malaysia.

(2) From bots to virtual agents. NLP will be used to automate convincing online conversations.
Apr 4, 2022 8 tweets 2 min read
A while back, I put together some principles for how to keep yourself safe (and sane) in an online world alight with information warfare and manipulation.

As these things sadly tend to get more rather than less relevant, I thought I'd share them below in a thread. I hope they're useful.

1. Guard against outrage

Activating outrage is the easiest way to manipulate you. It is present in literally every info warfare campaign I've ever analysed. When you become angry, you make others angry as well - both your friends and opponents. Guard against it.
Mar 22, 2022 7 tweets 2 min read
There is, rightly I think, a lot of concern about false amplification - where it might be happening and what it might affect. A lot of people will then immediately jump to the idea of 'bots', so I want to do a quick thread to disentangle this particular form of manipulation.

The reasons for false amplification, btw, are manifold. It can be to make something trend (which gives it visibility), or just to make something appear more popular or highly shared than it is - which can exploit the way we often conflate those metrics with authority.
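One crude way to make this concrete: coordinated false amplification often shows up as many distinct accounts pushing the same hashtag inside a narrow time window. The sketch below is purely illustrative - the data shape, window size and threshold are assumptions for the example, not anything from our actual research:

```python
from collections import defaultdict

def burst_windows(posts, window_s=60, min_accounts=20):
    """Flag time windows in which unusually many distinct accounts
    post the same hashtag -- a crude false-amplification signal.
    `posts` is a list of (account_id, unix_timestamp) pairs."""
    buckets = defaultdict(set)
    for account, ts in posts:
        buckets[ts // window_s].add(account)
    # Return the start time of each suspicious window, sorted.
    return sorted(b * window_s for b, accounts in buckets.items()
                  if len(accounts) >= min_accounts)
```

A real pipeline would layer many such signals (account age, content similarity, posting cadence) rather than rely on any single heuristic like this.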
Mar 18, 2022 27 tweets 12 min read
When we say Kyiv is winning the information war, far too often we only mean the information spaces we inhabit.

Pulling apart the most obvious RU info op to date (as we did using semantic modelling), it's very clear it is targeting BRICS, Africa, Asia. Not the West really at all.

This is the kind of thing this network shares, by the way. Mainly an amplification network pumping a small number of viral pro-invasion memes, largely around themes of Western hypocrisy, NATO expansionism and BRICS solidarity.
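For a flavour of what this kind of account clustering can involve, here is a toy stand-in: grouping accounts whose tweet vocabularies are unusually similar, via cosine similarity over bag-of-words vectors. This is not the semantic modelling we actually used; the account names and threshold are invented for illustration.

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two word-count Counters."""
    dot = sum(a[w] * b[w] for w in a if w in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def cluster_accounts(tweets_by_account, threshold=0.5):
    """Group accounts whose tweet vocabulary is highly similar,
    using a simple union-find over pairwise cosine links."""
    vecs = {a: Counter(" ".join(ts).lower().split())
            for a, ts in tweets_by_account.items()}
    parent = {a: a for a in vecs}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    accounts = list(vecs)
    for i, a in enumerate(accounts):
        for b in accounts[i + 1:]:
            if cosine(vecs[a], vecs[b]) >= threshold:
                parent[find(a)] = find(b)

    clusters = {}
    for a in accounts:
        clusters.setdefault(find(a), set()).add(a)
    return sorted(map(sorted, clusters.values()))
```

Real semantic models work on embeddings rather than raw word counts, but the basic move - cluster accounts by what they say, then inspect the clusters - is the same.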
Aug 14, 2020 7 tweets 2 min read
1/ @ChloeColliver2 @ISDglobal and I have been doing a lot of thinking over the past few months. It really feels like we're losing the battle against disinformation, online manipulation, info ops. We're drowning in it.

How do we get ahead?

isdglobal.org/wp-content/upl…

2/ We've come up with an argument for how civil society needs to strategically respond in a big way to this problem, which menaces basically every important issue that we care about, from democratic elections to climate change to human rights and social justice.
Jun 29, 2020 7 tweets 2 min read
1/ My colleagues @ISDglobal have just released an amazing, intricate, detailed and painstaking study into a very weird world: a sprawling online empire of health disinformation and conspiracy theories called 'Natural News'.

A few thoughts below

isdglobal.org/wp-content/upl…

2/ The whole operation relies on industrialised domain registration. They've registered almost 500 domains since 1996, most of them in 2015. It's a cloud of new, old, defunct and lively sites that has been constantly changing.
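The registration-burst pattern described above is the kind of thing a simple tally over domain creation dates would surface. A minimal sketch, with made-up data - this is not ISD's actual pipeline:

```python
from collections import Counter
from datetime import date

def registrations_per_year(creation_dates):
    """Tally domain registrations per year to surface bursts of
    industrialised registration activity.
    `creation_dates` is an iterable of datetime.date objects
    (e.g. parsed from WHOIS records)."""
    return Counter(d.year for d in creation_dates)
```

Usage is just `registrations_per_year(dates).most_common(1)` to find the peak year - for Natural News, that peak was 2015.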
May 4, 2020 8 tweets 5 min read
1/ A thread sharing more numbers from our investigation into the scale and size of online social organisation underneath the #COVID-19 infodemic.

To benchmark everything, since Jan, posts to the WHO website got 6.2M interactions on FB. CDC got 6.4M

EpochTimes got 48M...

2/ Here's more from the same NewsGuard list. There were 34 websites overall, and together we measured 80M interactions on posts linking to them since Jan.

TheMindUnleashed.com: 8.36M
RedStateWatcher.com: 7.62M
WND.com: 5.48M

(...)
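Benchmarks like these come from tallying interactions on posts by the domain they link to. A minimal sketch of that aggregation with invented inputs - not our actual data pipeline:

```python
from collections import Counter
from urllib.parse import urlparse

def interactions_by_domain(posts):
    """Sum interaction counts per linked domain.
    `posts` is an iterable of (url, interaction_count) pairs."""
    totals = Counter()
    for url, n in posts:
        domain = urlparse(url).netloc.lower()
        if domain.startswith("www."):
            domain = domain[4:]  # normalise www. and bare domains together
        totals[domain] += n
    return totals
```

The comparison in the thread is then just this tally for the listed sites against the same tally for who.int and cdc.gov over the same period.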
May 3, 2020 6 tweets 2 min read
1/ The most surprising numbers from a new investigation out this weekend I did with the BBC and @ISDglobal:

Posts linking to The Epoch Times - a website I'd only dimly heard of - have received 48M interactions since Jan.

Over the same time period:

The CDC: 6.4M
WHO: 6.2M

2/ Now, The Epoch Times has had advertising banned by Facebook and has been accused of running 'inauthentic networks' by both Twitter and FB. NewsGuard has claimed it has shared information about coronavirus that is 'materially false'.
May 2, 2020 9 tweets 4 min read
1/ Our @BBCClick co-investigation airs today. Working with @ISDglobal, we looked at how both extremist political and fringe medical communities have tried to exploit the pandemic online.

Check it out: bbc.co.uk/iplayer/episod…

And a thread on what we found below.

2/ First: the scale. We measured 80 million interactions since Jan on Facebook posts containing links to 34 websites listed by NewsGuard as having shared info about COVID that was 'materially false'.

To give you a baseline, WHO and CDC were ~6M each over the same time.
Oct 21, 2019 8 tweets 2 min read
I've written seven rules to resist online manipulation. A thread explaining each, below:

1. Guard against outrage
2. Don't passively scroll
3. Actively find info
4. Slow down online
5. Don't trust metrics
6. Use non-online sources
7. Spend your attention wisely

1. Activating outrage is the easiest way to manipulate you. It is present in literally every info warfare campaign I've ever analysed. When you become angry, you make others angry as well - both your friends and opponents. Guard against it.
Oct 20, 2019 8 tweets 2 min read
1/ I've come up with seven rules for how to protect yourself from online information warfare. A thread on what they are:

1. The information that wants to find you usually isn't the information you want to find. Actively search out the information you need from the Internet...

2. Related to this - BEWARE the passive scroll. You make yourself prey to the curation and algorithmic ranking of content that can be easily gamed and manipulated.
Oct 6, 2019 6 tweets 2 min read
I'm tweeting this weekend about our investigation into possible state-based, systematic editing of Wikipedia for geopolitics bbc.co.uk/news/technolog…

A thread on the most important issue it raises: in a world of info warfare, how do you protect platforms outside of big tech?

We've seen info warfare break out across Twitter, FB, Insta, YouTube, VK - basically almost any platform that can be used as a channel for influence.

Our reaction: chastise the tech giants. Embarrass them, criticise them, get them to react.