Eliot Higgins
Sep 2 • 15 tweets • 3 min read
🧵 Important investigation by @mariannaspring on how social media algorithms push harmful content to young users. This connects closely to my research on online radicalisation. Let me explain how.
What happened to Cai in this article is a clear example of how online radicalisation often begins. It starts with seemingly harmless content and quickly escalates, because algorithms prioritise engagement over user safety.
Social media platforms use algorithms designed to keep users engaged by feeding them ever more sensational content. This means a teenager who watches a few neutral videos can suddenly find themselves immersed in far more extreme or harmful material.
The issue isn’t just about the content itself—it’s about the pathways these algorithms create. Once a user engages with content that the algorithm finds "interesting," it starts recommending similar or even more extreme material, regardless of whether it’s safe or harmful.
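To make that pathway concrete, here is a deliberately simplified toy sketch, in Python, of an engagement-driven feedback loop. Every number, label, and formula in it is invented for illustration; no platform publishes its real ranking code, and actual systems are vastly more complex.

# Toy model of an engagement-driven recommendation loop. All values here
# are hypothetical and exist only to illustrate the feedback dynamic.

# Invented catalogue of (label, "extremity" score from 0 to 1).
CATALOGUE = [
    ("neutral clip", 0.1),
    ("edgy joke", 0.3),
    ("outrage bait", 0.6),
    ("extreme content", 0.9),
]

def recommend(user_taste: float) -> tuple[str, float]:
    # Crude stand-in for an engagement model: it assumes the user will
    # engage most with content slightly *beyond* what they already watch,
    # and ranks on that alone -- there is no safety term in the objective.
    target = user_taste + 0.2
    return min(CATALOGUE, key=lambda item: abs(item[1] - target))

user_taste = 0.1  # starts out watching a few neutral videos
for step in range(12):
    label, extremity = recommend(user_taste)
    # Watching the recommendation nudges the user's profile toward it,
    # making the next recommendation more extreme: the feedback loop.
    user_taste = 0.7 * user_taste + 0.3 * extremity
    print(f"step {step:2d}: '{label}' (taste drifts to {user_taste:.2f})")

Run it and the feed drifts from "neutral clip" to "extreme content" within a dozen steps, even though no line of the code intends harm: the escalation is an emergent property of optimising purely for predicted engagement.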
This is where the concept of the "pyramid of radicalisation" comes into play. At the base of the pyramid you have the largest group of users, those exposed to mildly provocative or neutral content. A percentage of these users will engage with more extreme content as they encounter it.
The danger is that those at the tip of the pyramid may feel compelled to take real-world action, which can include violence. Imagine someone who starts by googling "are vaccines safe" and ends up burning down 5G towers because they think they’ll activate the microchips Bill Gates supposedly put in vaccines. This is the kind of real-world impact that can result from online radicalisation.
But it’s not just the content that leads to radicalisation—it’s also the communities that form around this content. As users consume more extreme material, they often find themselves in online groups that reinforce and amplify these views, creating a powerful feedback loop.
These communities provide a sense of belonging and validation, which can be very appealing, especially to those who feel alienated or distrustful of mainstream narratives. This is where radicalisation really takes hold—through community and interaction, not just through content.
The algorithms act as gatekeepers, deciding what content and communities users are exposed to. In doing so, they can unintentionally guide young users down paths toward more radical thinking by constantly feeding them content that fuels outrage or excitement.
Cai's experience, as detailed in the article, is a perfect example of this phenomenon. He started with harmless videos and soon found himself overwhelmed by violent, misogynistic content, leading to increased exposure to harmful ideologies.
To tackle this, we need a comprehensive approach: better transparency around how algorithms work, more effective content moderation, and education that empowers users to critically evaluate the content they see online.
Radicalisation online isn’t just about the content—it’s about the digital environment that platforms create, where extreme views can spread unchecked. This is why understanding and addressing the role of algorithms and online communities is so important.
Ultimately, it’s about building resilience and awareness. We need to teach young people—and all users—how to navigate digital spaces critically and recognise when they are being manipulated by algorithms designed for engagement, not safety.
Thanks to @mariannaspring for shedding light on this critical issue. It’s up to all of us to ensure social media platforms prioritise user well-being over engagement metrics.