Working on a response to Ted Postol's new Khan Sheikhoun crater theory. It blows my mind that he thinks the crater was formed by a 122mm warhead; there are multiple examples of 122mm impacts that look nothing like the Khan Sheikhoun crater. Examples from Mariupol in 2015 shown below:
Left is what Ted Postol claims was caused by a warhead from a 122mm Grad; Right is an actual 122mm Grad warhead impact
And just to be 100% clear, here's Postol stating it was caused by a 122mm warhead. Can anyone find a single image of a 122mm warhead strike that's comparable to the Khan Sheikhoun crater?
We have lots of videos from the aftermath of the 2015 Grad rocket attack on Mariupol showing multiple craters on multiple surfaces, none of which come close to the size and depth of the Khan Sheikhoun crater: drive.google.com/drive/folders/…
If you've any other examples of craters from Grad rockets, send them over, the more the merrier.
I have to wonder if Postol simply didn't look at real-world examples of Grad rocket impacts before concluding his simulation was correct. It seems like a basic thing to overlook.
🧵 1/7: The European Court of Human Rights has ruled in favor of Russian NGOs and media groups (including @Bellingcat), declaring Russia's "foreign agent" legislation a violation of fundamental human rights. The court found that the law imposes undue restrictions on freedom of expression & association.
2/7: The law requires NGOs & individuals receiving foreign funds to register as “foreign agents,” facing stigma, harsh reporting requirements, and severe penalties. This label implies foreign control—without proof—and misleads the public.
3/7: The Court noted that the "foreign agent" label, linked to spies & traitors, damages the reputation of those designated and leads to a chilling effect on civil society and public discourse.
It's currently 9:11am, this post has 3 views, and no retweets or likes on an account with 75 followers. Let's see how long it takes for it to get several hundred retweets, and a few tens of thousands of views.
In the last 15 minutes, that tweet gained 15.7k views and 187 likes, with no retweets. Two other tweets with similarly fake stories, posted around the same time from similar profiles, have also suddenly gained a couple of hundred likes and around the same number of views. This is, in real time, how a Russian disinformation campaign is using Twitter to promote its fake stories.
The thing is, apart from about 10 views, all of this engagement, including every like, is entirely inauthentic. This doesn't help them reach genuine audiences; it just boosts their stats so that when they report back to their paymasters, they can point to how many views, likes, and retweets they got, even though they're all fake. It's effectively the people running these campaigns scamming their paymasters into thinking it's working, when it's not at all.
A new fake Bellingcat story, from a fake video claiming to be from Fox News. What's interesting about this one is that when I viewed the tweet 10 minutes ago it had 5 views; it then suddenly jumped to 12.5k, and then 16.2k views, in less than 5 minutes, with zero retweets or likes.
To me this suggests there's a bot network being used to boost views of tweets used in this disinformation campaign.
In 90 seconds this tweet just gained 154 retweets, another sign of bot activity.
It's clear this is a coordinated attack from pro-Orban media that they really don't want noticed outside of Hungary, but what they don't seem to realise is that I'm now going to use what they did in every presentation I give on disinformation to audiences across the world.
What's notable is that the accusations made against Bellingcat were all taken (uncredited) from an article published by MintPress claiming we have loads of intelligence agents working for us, which even the original MintPress article fails to prove.
Which to me just means I get to add a couple more slides to the presentation I'll be doing about this, to audiences made up of exactly the sort of people they didn't want to find out about this.
State actors see alternative media ecosystems as a vehicle for promoting their agendas, and take advantage of that by not just covertly funding them, but also giving them access to their officials and platforming them at places like the UN.
A recent example of that is Jackson Hinkle going to Eastern Ukraine, then getting invited to the UN by Russia to speak at a press conference, and that footage being used by state media as evidence of "experts" rejecting the "mainstream narratives" on Ukraine.
A lack of transparency around the funding of the individuals and websites that are part of these alternative media ecosystems allows for state actors to get away with their covert influence, a clear example of which we've seen over the last 24 hours.
🧵 Important investigation by @mariannaspring on how social media algorithms push harmful content to young users. This connects closely to my research on online radicalisation. Let me explain how.
What happened to Cai in this article is a clear example of how online radicalisation often begins. It starts with seemingly harmless content and quickly escalates because algorithms prioritize engagement over user safety.
Social media platforms use algorithms designed to keep users engaged by feeding them engaging and sensational content. This means a teenager watching a few neutral videos can suddenly find themselves immersed in more extreme or harmful material.