🧵 Important investigation by @mariannaspring on how social media algorithms push harmful content to young users. This connects closely to my research on online radicalisation. Let me explain how.
What happened to Cai in this article is a clear example of how online radicalisation often begins. It starts with seemingly harmless content and quickly escalates because algorithms prioritise engagement over user safety.
Social media platforms use algorithms designed to keep users on the platform by feeding them sensational, attention-grabbing content. This means a teenager who watches a few neutral videos can suddenly find themselves immersed in more extreme or harmful material.
The issue isn’t just about the content itself—it’s about the pathways these algorithms create. Once a user engages with content that the algorithm finds "interesting," it starts recommending similar or even more extreme material, regardless of whether it’s safe or harmful.
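To make that dynamic concrete, here is a minimal toy simulation of an engagement-ranked recommender, written in Python. Everything in it is an assumption for illustration (the 0-to-1 "extremity" scale, the candidate pool, the rule that more provocative content scores higher on engagement); it is not any platform's real algorithm, just a sketch of how "rank by predicted engagement" can ratchet a feed toward the extreme.

```python
import random

random.seed(42)

def engagement_score(extremity: float) -> float:
    # Core assumption of this sketch: more provocative content tends
    # to hold attention longer, so it scores higher on average.
    return extremity + random.gauss(0, 0.05)

def next_item(current: float) -> float:
    # The candidate pool sits close to what the user already watches...
    candidates = [min(1.0, max(0.0, current + random.gauss(0, 0.1)))
                  for _ in range(10)]
    # ...but the ranker serves whichever candidate maximises predicted
    # engagement, not whichever is safest.
    return max(candidates, key=engagement_score)

extremity = 0.1  # user starts near neutral content (0 = neutral, 1 = extreme)
for _ in range(30):
    extremity = next_item(extremity)

print(f"content extremity after 30 recommendations: {extremity:.2f}")
```

Even though each candidate pool is drawn symmetrically around what the user already watches, always serving the highest-engagement option drags the feed steadily upward.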
This is where the concept of the "pyramid of radicalisation" comes into play. At the base of the pyramid, you have the widest number of users who are exposed to mildly provocative or neutral content. A percentage of these users will engage with more extreme content as they encounter it.
The danger is that those at the tip of the pyramid may feel compelled to take real-world action, which can include violence. Imagine someone who starts by googling "are vaccines safe" and ends up burning down 5G towers because they think they’ll activate the microchips Bill Gates supposedly put in vaccines. This is the kind of real-world impact that can result from online radicalisation.
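As a back-of-the-envelope illustration of that pyramid, here is a toy funnel calculation. The tier names and conversion rates below are invented purely for the example; the point is only that tiny percentages of an enormous base still leave real people at the tip.

```python
# Hypothetical funnel: every rate here is an assumption for illustration,
# not an empirical estimate of real radicalisation rates.
exposed = 1_000_000  # base of the pyramid: users shown mildly provocative content

tiers = [
    ("engage with more extreme content", 0.05),
    ("join communities built around it", 0.10),
    ("internalise the ideology",         0.10),
    ("consider real-world action",       0.01),
]

population = exposed
print(f"{'exposed at the base':40s}{population:>10,}")
for label, rate in tiers:
    population = int(population * rate)
    print(f"{label:40s}{population:>10,}")
```

With these made-up rates, a million exposed users shrink to fifty thousand, then five thousand, then five hundred, and finally about five at the tip: a vanishingly small fraction, but not zero, and at the scale platforms operate at, that is the problem.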
But it’s not just the content that leads to radicalisation; it’s also the communities that form around this content. As users consume more extreme material, they often find themselves in online groups that reinforce and amplify these views, creating a powerful feedback loop.
These communities provide a sense of belonging and validation, which can be very appealing, especially to those who feel alienated or distrustful of mainstream narratives. This is where radicalisation really takes hold—through community and interaction, not just through content.
The algorithms act as gatekeepers, deciding what content and communities users are exposed to. In doing so, they can unintentionally guide young users down paths toward more radical thinking by constantly feeding them content that fuels outrage or excitement.
Cai's experience, as detailed in the article, is a perfect example of this phenomenon. He started with harmless videos and soon found himself overwhelmed by violent, misogynistic content, leading to increased exposure to harmful ideologies.
To tackle this, we need a comprehensive approach: better transparency around how algorithms work, more effective content moderation, and education that empowers users to critically evaluate the content they see online.
Radicalisation online isn’t just about the content—it’s about the digital environment that platforms create, where extreme views can spread unchecked. This is why understanding and addressing the role of algorithms and online communities is so important.
Ultimately, it’s about building resilience and awareness. We need to teach young people, and all users, how to navigate digital spaces critically and recognise when they are being manipulated by algorithms designed for engagement, not safety.
Thanks to @mariannaspring for shedding light on this critical issue. It’s up to all of us to ensure social media platforms prioritise user well-being over engagement metrics.
Thread on a disinformation campaign involving celebrities. Last week, the Bellingcat contact email received this message, directing us to look at a video titled "Olympics Has Fallen 2", supposedly voiced by @elonmusk himself. So we dug into it, because it was fishy as hell.
Firstly, the original "Olympics Has Fallen" was a fake Netflix documentary, using an AI-generated @TomCruise voiceover, which sought to undermine the Olympics and France's hosting of them; it is suspected to be linked to Russian proxies cyberscoop.com/russia-tom-cru…
The email sent to Bellingcat included a QR code that (after we checked it for any security issues) led us to a Telegram channel dedicated to the documentary. It was active from July 3rd to July 6th 2023, then again from June 24th 2024, promoting the sequel.
Just read Greg Palast's piece on RFK Jr. and it’s a clear example of how personal trauma and distrust in authorities can lead someone down the path of disinformation and conspiracy theories. This isn’t an isolated case; it’s a pattern we see again and again.
In my work at Bellingcat, we’ve seen how the erosion of trust in mainstream sources often pushes individuals to seek alternative narratives. For RFK Jr., this loss of trust seems to have opened the door to a range of conspiracy theories.
The shift from healthy scepticism to embracing baseless conspiracy theories mirrors the journey of many “true believers” through conspiracy land. They start by questioning official stories and end up rejecting any evidence that contradicts their beliefs.
For some reason, @elonmusk's X is now claiming the link to this article about Russia bombing a children's hospital is unsafe, because apparently it has "been identified by X or our partners as being potentially spammy or unsafe". Unsafe to who, Putin? 🤔
Seeing as @elonmusk already thinks Bellingcat is a "psy-op", you have to wonder if this is deliberate censorship from the self-proclaimed free speech absolutist. Based on his recent behaviour, I guess that only counts when you're a far-right grifter posting CSAM.
Here's more evidence of Russia's involvement in the bombing of the children's hospital, but look at it before @elonmusk blocks that too
🧵 I think what's key to answering this question is recognising that how we encounter and consume information has changed dramatically over the last 15 years, and this has particularly impacted Gen Z and Gen Alpha
We’ve shifted from a top-down, gatekept model of information consumption (for example, getting our news from newspapers and TV news) to a more peer-to-peer relationship with information, thanks to social media.
That also changes how we interact with information: we’re more active participants in that flow, in both how we respond to it and how we share it. Importantly, we have to understand that for Gen Z and Gen Alpha this is their default state.
Reposting this to make the point clearer: this is a real image that is being dismissed as AI because of a crap AI-detection website that doesn't actually work. AI gives people a permission structure to deny reality. A video of the incident is here aljazeera.com/program/newsfe…
It's not the first time I've seen someone do this with Israel and Gaza. Bad AI detection tools are used to deny reality, but ultimately people who don't want to believe something is true will just dismiss it as AI generated anyway.
It's no different from calling every video from Gaza Pallywood, or claiming the White Helmets fake videos in Syria. It's just propagandists creating a permission structure to deny reality.
🧵 I've been digging into this, and it's pretty clear that part of this campaign against Graham Phillips is driven by an article on a fake news website that appears to be run by John Mark Dougan himself.
I'm not sure where this all started, but the first time the spat appears to have gone public is this post by Graham Phillips about John Mark Dougan, stating "Dougan is accused of having 'gone rogue', and suspected of having taken money from western agencies." t.me/grahamwphillip…
The following day, John Mark Dougan posted this now deleted Telegram post, making various allegations and linking to an article on a site called "ukpoliticking", published on the same day. t.me/BadVolfNews/16…