A new fake Bellingcat story, from a fake video claiming to be from Fox News. What's interesting about this one is that when I viewed the tweet 10 minutes ago it had 5 views, then it suddenly jumped to 12.5k, then 16.2k views, in less than 5 minutes, with zero retweets or likes.
To me this suggests there's a bot network being used to boost views of tweets used in this disinformation campaign.
In 90 seconds this tweet just gained 154 retweets, another sign of bot activity.
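For anyone wanting to spot this pattern themselves, here's a minimal sketch of the signature described above, views spiking far faster than organic engagement, in Python. Every field name, threshold and number here is my own illustrative assumption, not anything from Twitter's systems:

```python
from dataclasses import dataclass

@dataclass
class Snapshot:
    minutes: float   # minutes since first observation
    views: int
    retweets: int
    likes: int

def looks_boosted(a: Snapshot, b: Snapshot,
                  views_per_min_threshold: float = 500.0) -> bool:
    """Flag a tweet whose view count grows rapidly between two
    snapshots while retweets and likes stay near zero."""
    elapsed = b.minutes - a.minutes
    if elapsed <= 0:
        return False
    view_velocity = (b.views - a.views) / elapsed
    organic = (b.retweets - a.retweets) + (b.likes - a.likes)
    return view_velocity > views_per_min_threshold and organic <= 1

# The tweet above: 5 views -> 12,500 views in ~5 minutes, 0 retweets/likes.
before = Snapshot(minutes=0, views=5, retweets=0, likes=0)
after = Snapshot(minutes=5, views=12_500, retweets=0, likes=0)
print(looks_boosted(before, after))  # True
```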
The accounts retweeting the post are all clearly part of a bot network, probably one hired to do the task, rather than it being run by some Russian state proxy.
You can look at the other posts from the bots and find more examples of disinformation videos being retweeted. This example has a Trump video where anti-Ukrainian images and captions have been added to the original footage, in which Trump doesn't mention Ukraine at all.
The tweet featuring the video has a similar number of views and retweets (and 0 likes) as the first video in this thread, and uses a "Verified" QR code like other videos featured in the recent posts of the disinformation campaign.
Another example from the same bot network, using a fake verified QR code. If it's so easy for me to find this bot network, clearly attempting to influence the US election, then why are @elonmusk and his team unable to do anything about it?
Here's another one being pushed by the network, with another fake verification QR code. Note, these are all from the last 48 hours, so it's not just one or two videos here or there, but multiple videos a day.
Another fake video from the Russian disinformation network, from September 18th, where @jamieoliver supposedly recommends European chefs learn to cook from radioactive ash. Note the similar number of views and retweets.
I'm also noticing a lot of these bots retweeting other political tweets, like this one that was retweeted by multiple bots in the network.
Here's another one from the network, from yesterday. There are basically overlapping groups of bots, so if you find one disinformation tweet and start looking through the list of who retweeted it, you'll eventually find a new video, and then you can start the process again.
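For the technically minded, here's a rough sketch of that snowball process in Python. The two fetch functions are hypothetical stand-ins for whatever data access you have (they are not real API calls), and is_disinfo_video is whatever manual or automated check you use to confirm a video belongs to the campaign:

```python
from collections import deque

def get_retweeters(tweet_id: str) -> list[str]:
    # Hypothetical stand-in: return account IDs that retweeted the tweet.
    return []

def get_recent_tweets(account_id: str) -> list[str]:
    # Hypothetical stand-in: return recent tweet IDs from the account.
    return []

def snowball(seed_tweet: str, is_disinfo_video, max_accounts: int = 5000):
    """Start from one disinformation tweet, walk its retweeters, scan
    their timelines for new campaign videos, and repeat from those."""
    found = {seed_tweet}          # campaign tweets discovered so far
    seen_accounts = set()         # suspected bot accounts encountered
    queue = deque([seed_tweet])
    while queue and len(seen_accounts) < max_accounts:
        tweet = queue.popleft()
        for account in get_retweeters(tweet):
            if account in seen_accounts:
                continue
            seen_accounts.add(account)
            for t in get_recent_tweets(account):
                if t not in found and is_disinfo_video(t):
                    found.add(t)
                    queue.append(t)  # start the process again from here
    return found, seen_accounts
```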
Here's another one, again from the last 24 hours. The guy who thought up the idea of putting a "verified" QR code on these posts probably didn't think it would just make it easier to spot them.
More fake news posts from bots this morning, guess someone processed that purchase order.
I've had a quick look at the accounts retweeting this video, which led to another fake video, the retweets of which led to another, and then a fourth. There's probably more, and today's themes are P.Diddy, Joe Biden sleeping with Brooke Shields, and the collapse of the USA.
There appear to be waves of these videos being produced. I guess that's when whoever is running the bot network processes the order from whoever hired them. It's really blatant; the only reason it's not more visible is that pretty much all the retweets are from other bots, who only follow each other.
Before @elonmusk took over Twitter the Twitter API was a lot more accessible, so researchers faced fewer barriers to mapping these networks out, but since he took over that access has gone away, which allows these networks to continue to operate freely.
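To give a sense of what that mapping used to look like: once you'd collected retweet data (for example via the snowball approach sketched earlier), a few lines of Python with networkx would surface the clusters. The edge list here is entirely made up for illustration:

```python
import networkx as nx

# (retweeter, tweet_author) pairs collected during a crawl (illustrative)
edges = [("bot_a", "seed"), ("bot_b", "seed"), ("bot_a", "other"),
         ("bot_b", "other"), ("bot_c", "other")]

G = nx.Graph()
G.add_edges_from(edges)

# Bots hired as a block tend to form densely connected clusters that
# barely touch the rest of the graph, so cluster density is revealing.
for component in nx.connected_components(G):
    sub = G.subgraph(component)
    print(sorted(component), "density:", round(nx.density(sub), 2))
```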
Twitter was also a lot more responsive to the organisations and individuals who identified bot networks, but that also went away with Musk's takeover, so it's just a lot harder to deal with these networks now, all thanks to Musk's terrible decision making.
I'm sure there are many more posts like this, and they could be mapped out, but it's pretty apparent Twitter currently has neither the interest nor the ability to do that, given it's so blatant.
These posts are primarily about election interference, and while they're having no real impact, they still require a response and are evidence that foreign actors are trying to influence the US election. If @elonmusk were serious about fighting bots on this website, it's a pretty easy place to start.
I also have to wonder if this bot engagement is more about boosting stats that can be reported back to their paymasters as a reflection of a successful campaign, rather than about trying to create authentic engagement, because they're clearly failing at that. It's all about KPIs.
Something else I've picked up on looking at all these bots: certain posts seem to appear more often across the whole network, suggesting whoever is selling the use of the bot network has different tiers of bot engagement depending on how much you pay.
Russian bots sharing fake videos about the US election are now using a completely made-up Bellingcat Verified QR code.
It's been brought to my attention that there are videos published on social media claiming I've made various statements about the US election, related to election integrity. These are part of a Russian disinformation campaign, and the quotes are fabricated, but it's nice to know the Russians hold my opinions in such high regard.
I've previously discussed other videos in this campaign in the thread below:
🧵 1/7: The European Court of Human Rights has ruled in favor of Russian NGOs and media groups (including @Bellingcat), declaring Russia's "foreign agent" legislation a violation of fundamental human rights. The court found that the law imposes undue restrictions on freedom of expression & association.
2/7: The law requires NGOs & individuals receiving foreign funds to register as “foreign agents,” facing stigma, harsh reporting requirements, and severe penalties. This label implies foreign control—without proof—and misleads the public
3/7: The Court noted that the "foreign agent" label, linked to spies & traitors, damages the reputation of those designated and leads to a chilling effect on civil society and public discourse.
It's currently 9:11am; this post has 3 views and no retweets or likes, on an account with 75 followers. Let's see how long it takes for it to get several hundred retweets and a few tens of thousands of views.
In the last 15 minutes that tweet gained 15.7k views and 187 likes, with no retweets. Two other tweets with similarly fake stories, posted around the same time with similar profiles, have also suddenly gained a couple of hundred likes and around the same number of views. This is, in real time, how a Russian disinformation campaign is using Twitter to promote its fake stories.
The thing is, nearly all of this engagement is entirely inauthentic; apart from about 10 views, none of it, including the likes, is real. This doesn't help them reach genuine audiences; it just boosts their stats, so when they report back to their paymasters they can say how many views, likes and retweets they got, even though they're all fake. It's effectively the people running these campaigns scamming their paymasters into thinking it's working, when it's not at all.
It's clear this is a coordinated attack from pro-Orban media, which they really don't want noticed outside of Hungary, but what they don't seem to realise is that I'm now going to use what they did in every presentation I do on disinformation, to audiences across the world.
What's notable is that the accusations made against Bellingcat were all taken (uncredited) from an article published by MintPress claiming we have loads of intelligence agents working for us, which even the original MintPress article fails to prove.
Which to me just means I get to add a couple more slides to the presentation I'll be doing about this, to audiences made up of exactly the sort of people they didn't want finding out about this.
State actors see alternative media ecosystems as a vehicle for promoting their agendas, and take advantage of that by not just covertly funding them, but also giving them access to their officials and platforming them at places like the UN.
A recent example of that is Jackson Hinkle going to Eastern Ukraine, then getting invited to the UN by Russia to speak at a press conference, and that footage being used by state media as evidence of "experts" rejecting the "mainstream narratives" on Ukraine.
A lack of transparency around the funding of the individuals and websites that are part of these alternative media ecosystems allows state actors to get away with their covert influence, a clear example of which we've seen over the last 24 hours.
🧵 Important investigation by @mariannaspring on how social media algorithms push harmful content to young users. This connects closely to my research on online radicalisation. Let me explain how.
What happened to Cai in this article is a clear example of how online radicalisation often begins. It starts with seemingly harmless content and quickly escalates because algorithms prioritize engagement over user safety.
Social media platforms use algorithms designed to keep users engaged by feeding them ever more engaging and sensational content. This means a teenager watching a few neutral videos can suddenly find themselves immersed in more extreme or harmful material.
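To make the mechanism concrete, here's a toy sketch of engagement-first ranking. Everything here, the field names, scores and items, is made up for illustration; the point is simply that nothing in the ranking objective penalises harmful content:

```python
def rank_feed(items):
    # Items are ordered purely by predicted engagement; note that
    # harm_score exists in the data but is never consulted.
    return sorted(items, key=lambda x: x["predicted_engagement"], reverse=True)

feed = [
    {"id": "neutral_video", "predicted_engagement": 0.12, "harm_score": 0.0},
    {"id": "sensational_video", "predicted_engagement": 0.55, "harm_score": 0.4},
    {"id": "extreme_video", "predicted_engagement": 0.81, "harm_score": 0.9},
]
print([item["id"] for item in rank_feed(feed)])
# ['extreme_video', 'sensational_video', 'neutral_video']
```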