1/ Four quick predictions for where influence operations are going, which I've just made at the @G7 #FactsvsDisinformation conference.
(1) Strategic evolution. We've become used to campaigns targeting Western audiences, using wedge issues to hasten social division.
Especially in relation to Ukraine, we're seeing BRICS countries being addressed, and very contested information environments in India, South Africa, Indonesia and Malaysia.
(2) From bots to virtual agents. NLP will be used to automate convincing online conversations.
(3) From Twitter to Wikipedia: more epistemically significant online environments will be targeted, often using entryist strategies to influence the online communities that are responsible for them.
(4) From geopolitics to a marketplace of for-profit illicit online services, wrapping together black-hat SEO, paid-to-engage services, hack-and-leaks and so on. More innovation will likely happen here than within states, and states will increasingly buy these services.
A while back, I put together some principles for how to keep yourself safe (and sane) in an online world alight with information warfare and manipulation.
As these things sadly tend to get more rather than less relevant, I thought I'd share them below in a thread. I hope they're useful.
1. Guard against outrage
Activating outrage is the easiest way to manipulate you. It is present in literally every info warfare campaign I've ever analysed. When you become angry, you make others angry as well - both your friends and opponents. Guard against it.
2. Beware the passive scroll
Sitting there scrolling through your feed makes you prey to all the gaming and manipulation that targets algorithmic curation. This is one of the ways that illicit/manipulative content makes itself extremely visible.
There is, rightly I think, a lot of concern about false amplification and where it might be happening and what it might affect. A lot of people will then immediately jump to the idea of 'bots', so I want to do a quick thread to disentangle this particular form of manipulation.
The reasons for false amplification, btw, are manifold. It can be to make something trend (which gives it visibility), or just to make something appear more popular or more widely shared than it is - which can exploit the way we often conflate those metrics with authority.
How does it happen? A lot of this exists as commercial offerings in the world of grey- or black-hat spam. You can buy packages of already-created accounts at scale from a range of websites. Here's one example, where they're packaged up - 'human mix', 'EN', 'basic' and so on.
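Not from the original thread, but a minimal sketch of one way this kind of purchased amplification can be surfaced on the analysis side: flag near-identical text posted by several distinct accounts within a short window. The field names, thresholds and sample data below are all illustrative assumptions, not a real detection pipeline.

```python
# Toy heuristic (illustrative only): many distinct accounts posting
# near-identical text within a short window is one crude signal of
# false amplification. Thresholds and data are made up.
from collections import defaultdict
from datetime import datetime, timedelta

posts = [
    # (account, text, timestamp) - stand-in for a platform export
    ("acct_01", "Great thread, everyone should read this!", datetime(2022, 3, 1, 12, 0)),
    ("acct_02", "Great thread, everyone should read this!", datetime(2022, 3, 1, 12, 2)),
    ("acct_03", "Great thread, everyone should read this!", datetime(2022, 3, 1, 12, 3)),
    ("acct_04", "Completely unrelated post about the weather", datetime(2022, 3, 1, 12, 5)),
]

WINDOW = timedelta(minutes=10)   # how close in time the copies must sit
MIN_ACCOUNTS = 3                 # how many distinct accounts before flagging

def normalise(text: str) -> str:
    """Crude normalisation so trivial edits don't hide duplicates."""
    return " ".join(text.lower().split())

groups = defaultdict(list)
for account, text, ts in posts:
    groups[normalise(text)].append((account, ts))

for text, items in groups.items():
    accounts = {a for a, _ in items}
    times = sorted(ts for _, ts in items)
    if len(accounts) >= MIN_ACCOUNTS and times[-1] - times[0] <= WINDOW:
        print(f"Possible coordinated amplification ({len(accounts)} accounts): {text!r}")
```

Real operations vary the wording and the timing, so anything production-grade would need fuzzier matching; this just shows the shape of the signal.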
When we say Kyiv is winning the information war, far too often we only mean the information spaces we inhabit.
Pulling apart the most obvious RU info op to date (as we did using semantic modelling), it's very clear it is targeting BRICS, Africa and Asia - not really the West at all. (A generic sketch of that kind of text clustering follows at the end of this thread.)
So there's a lot of media attention on this work, which is great. But I'd like to clarify two things:
(1) Does the data definitively point towards the Russian state? No, that's not what data science can do. Twitter has taken down some of the accounts for 'coordinated inauthentic behaviour', but exactly who is behind them is a judgement call. Contextually, and in terms of the techniques likely used, my judgement is that it is a pro-Russian, pro-invasion operation; but that's my impression as a researcher who spends their time pulling apart these things.
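The thread names semantic modelling but doesn't spell out the method, so here is a generic stand-in rather than the actual technique used: cluster post text with TF-IDF and k-means, then inspect the top terms per cluster to get a rough sense of which audiences a set of accounts is addressing. The sample posts and parameters are invented for illustration.

```python
# Generic stand-in for "semantic modelling" of an operation's output
# (not the method actually used): TF-IDF + k-means over post text,
# then the top-weighted terms per cluster. Toy data throughout.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

posts = [
    "Western sanctions are hurting ordinary people in Africa",
    "India should not follow the Western line on this conflict",
    "Sanctions will raise food prices across Africa and Asia",
    "NATO expansion caused this crisis, the West is to blame",
    "South Africa must stay neutral and resist Western pressure",
    "The West ignores the Global South's energy and food needs",
]

vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(posts)

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

terms = vectorizer.get_feature_names_out()
for i, centre in enumerate(km.cluster_centers_):
    top = [terms[j] for j in centre.argsort()[::-1][:5]]
    print(f"cluster {i}: {', '.join(top)}")
```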
1/ @ChloeColliver2 (@ISDglobal) and I have been doing a lot of thinking over the past few months. It really feels like we're losing the battle against disinformation, online manipulation and info ops. We're drowning in it.
2/ We've come up with an argument for how civil society needs to respond strategically, in a big way, to this problem, which menaces basically every important issue we care about, from democratic elections to climate change to human rights and social justice.
3/ Partly this is down to ramping up our capacity to detect it, constantly and across the board. We need to build modular detection tech that different orgs can plug in and use to protect the public discussions they care about. It needs to be interchangeable and shareable.
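One way to read "modular detection tech that different orgs can plug in" is a shared detector interface that any organisation could implement and swap out. This is purely a hypothetical sketch: the `Detector` protocol, the `Post`/`Finding` types and the copy-paste detector are my own illustrative names, not anything the thread describes having built.

```python
# Hypothetical plug-in architecture for shareable detection modules.
from dataclasses import dataclass
from typing import Iterable, List, Protocol

@dataclass
class Post:
    account: str
    text: str

@dataclass
class Finding:
    detector: str
    account: str
    reason: str

class Detector(Protocol):
    """Anything with a name and a scan() method can be plugged in."""
    name: str
    def scan(self, posts: Iterable[Post]) -> List[Finding]: ...

class CopyPasteDetector:
    """Flags accounts posting text seen verbatim elsewhere in the sample."""
    name = "copy-paste"

    def scan(self, posts: Iterable[Post]) -> List[Finding]:
        posts = list(posts)
        counts: dict = {}
        for p in posts:
            counts[p.text] = counts.get(p.text, 0) + 1
        return [
            Finding(self.name, p.account, "verbatim duplicate text")
            for p in posts if counts[p.text] > 1
        ]

def run_pipeline(detectors: List[Detector], posts: List[Post]) -> List[Finding]:
    findings: List[Finding] = []
    for d in detectors:          # detectors are interchangeable plug-ins
        findings.extend(d.scan(posts))
    return findings

if __name__ == "__main__":
    sample = [Post("a", "same message"), Post("b", "same message"), Post("c", "original take")]
    for f in run_pipeline([CopyPasteDetector()], sample):
        print(f)
```

The point of the interface is the "interchangeable, shareable" part: an org protecting election debate and one protecting climate discussion could run the same modules over their own data.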
1/ My colleagues @ISDglobal have just released an amazing, intricate, detailed and painstaking study into a very weird world: a sprawling online empire of health disinformation and conspiracy theories called 'Natural News'.
2/ The whole operation relies on industrialised domain registration. They've knocked up almost 500 domains since 1996 - with most of the activity happening in 2015. It's a cloud of new, old, defunct and lively sites that has been constantly changing. (A toy sketch of how that kind of registration burst shows up follows after this thread.)
3/ What are they doing?
The network itself is a grab-bag: health disinformation, prepper ideology, hyper-hyper-partisan stuff, and (obviously) conspiracy theories galore. One site 'exposes', for instance, a government conspiracy to kidnap children for medical experiments.
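As flagged above, a toy sketch of how the registration burst described in 2/ would show up if you bucket domain creation dates (e.g. from WHOIS records) by year. The dates below are made up for illustration; the real figures are in the ISD study.

```python
# Counting domain registrations per year to spot a burst. Invented data.
from collections import Counter
from datetime import date

creation_dates = [
    date(1996, 5, 1), date(2008, 3, 14), date(2015, 1, 9), date(2015, 2, 2),
    date(2015, 6, 30), date(2015, 11, 17), date(2016, 4, 5), date(2019, 7, 21),
]

by_year = Counter(d.year for d in creation_dates)
for year in sorted(by_year):
    print(f"{year}: {'#' * by_year[year]}  ({by_year[year]} domains)")
```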
1/ A thread sharing more numbers from our investigation into the scale and size of online social organisation underneath the #COVID-19 infodemic.
To benchmark everything: since Jan, posts linking to the WHO website got 6.2M interactions on FB, and the CDC got 6.4M.
EpochTimes got 48M...
2/ Here's more from the same NewsGuard list. There were 34 websites overall, and together we measured 80M interactions on posts linking to them since Jan.
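A toy illustration of the benchmarking step in this thread: sum Facebook interactions on posts linking to each domain and compare the totals against reference sites. The input rows, domain names and numbers below are invented; the real measurement covered posts linking to the 34 NewsGuard-listed sites above.

```python
# Aggregate interactions per linked domain and rank them. Invented data.
from collections import defaultdict

# (domain, interactions) rows, e.g. exported from a social listening tool
rows = [
    ("who.int", 3_100_000), ("who.int", 3_100_000),
    ("cdc.gov", 6_400_000),
    ("misinfo-site-1.example", 30_000_000),
    ("misinfo-site-2.example", 18_000_000),
]

totals = defaultdict(int)
for domain, interactions in rows:
    totals[domain] += interactions

for domain, total in sorted(totals.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{domain:28s} {total/1e6:5.1f}M interactions")
```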