Mark Ledwich
Aug 27, 2019 · 12 tweets
1/ The “Auditing Radicalization Pathways” study is worth looking at. arxiv.org/abs/1908.08313

They use comments to correlate users' pathways across time. This is clever and a great way to get at the true movement of viewers between channels. However...
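(To make the method concrete: roughly, the measurement could look like the sketch below. This is a minimal pandas illustration assuming a hypothetical comment log with user/category/date columns and a simple year split; none of the file or column names come from the paper.)

```python
import pandas as pd

# Hypothetical comment log: one row per (user, channel, category, date),
# where "category" is the study's channel group (IDW, Alt-right, Control, ...).
comments = pd.read_csv("comments.csv", parse_dates=["date"])

year1 = comments[comments["date"].dt.year == 2017]
year2 = comments[comments["date"].dt.year == 2018]

# Users who commented *only* on IDW channels in year 1 ("IDW-exclusive").
per_user = year1.groupby("user")["category"].agg(set)
idw_exclusive = per_user[per_user.apply(lambda cats: cats == {"IDW"})].index

# Of those, what share show up commenting on an alt-right channel in year 2?
moved = year2[year2["user"].isin(idw_exclusive) &
              (year2["category"] == "Alt-right")]["user"].nunique()
print(f"{moved / max(len(idw_exclusive), 1):.1%} of IDW-exclusive commenters")
```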
2/ I read the paper and came away baffled about why it is considered the evidence supporting the rabbit-hole/alt-right-radicalization narrative. Let me show you the important parts.
3/ This shows that IDW-exclusive commenters are more likely than the control group to comment on alt-right channels in the future.
4/ @JeffreyASachs called this 4% movement (vs 1% for the control) a “high % of people”, and author @manoelribeiro called it “consistent migration” to the alt-right. Decide for yourself whether that is fair.

I do believe this result corresponds to reality, though...
5/ In my model of an ideological movement, viewers are more likely to migrate to channels/politics close to them in the ideological/recommendation landscape.

I have highlighted the large matching channels in our datasets, showing the IDW/alt-right/control proximity.
6/ This doesn't contradict their result. But the study doesn't look at the obvious other directions of movement that would support or refute the alt-right rabbit-hole theory. For example, these movements aren't part of their results:
7/

Left -> Far-left: Do mainstream left channels (e.g. Vox, Wired) “infect” people into far-left YouTube?

Alt-right -> IDW: At what rate are alt-right users un-“infected” and moved onto IDW channels? Is the movement more in one direction than the other? (A sketch of this comparison follows below.)
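(That comparison is cheap once you have the commenter data. A hedged sketch, reusing the hypothetical `comments`, `per_user` and `year2` objects from the earlier snippet; the category labels here are stand-ins, not the paper's exact ones.)

```python
def migration_rate(frm: str, to: str) -> float:
    """Share of users exclusive to `frm` in year 1 who comment on `to` in year 2."""
    exclusive = per_user[per_user.apply(lambda cats: cats == {frm})].index
    moved = year2[year2["user"].isin(exclusive) &
                  (year2["category"] == to)]["user"].nunique()
    return moved / max(len(exclusive), 1)

# The direction the paper reports...
print("IDW -> Alt-right:", migration_rate("IDW", "Alt-right"))
# ...and the directions it doesn't (category names are illustrative stand-ins):
print("Alt-right -> IDW:", migration_rate("Alt-right", "IDW"))
print("Left -> Far-left:", migration_rate("Left", "Far-left"))
```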
8/ Until you can compare these other movements, the rabbit-hole claim is not evidenced either way.

The next interesting result is a random walk: simulate many viewers starting in each of the channel categories, follow recommendations at random, and see where they end up after X steps.
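(In rough Python, the simulation could look like this. The recommendation graph `recs`, the `category` lookup and `alt_right_videos` are hypothetical placeholders, not the study's actual data structures.)

```python
import random
from collections import Counter

# Hypothetical inputs: recs[video] -> list of recommended video ids,
# category[video] -> the channel group that video belongs to.
def random_walk(recs, category, start_videos, steps=5, walkers=10_000):
    """Where do walkers end up after `steps` random clicks on recommendations?"""
    endings = Counter()
    for _ in range(walkers):
        video = random.choice(start_videos)
        for _ in range(steps):
            options = recs.get(video)
            if not options:
                break  # dead end: no recommendations crawled for this video
            video = random.choice(options)
        endings[category[video]] += 1
    total = sum(endings.values())
    return {cat: n / total for cat, n in endings.items()}

# e.g. start every walker on an alt-right video and see where they land:
# print(random_walk(recs, category, alt_right_videos, steps=5))
```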
9/ If you start from the alt-right, almost 100% of viewers end up watching non-alt-right videos. This shows the algorithm's influence: it leads away from the alt-right towards IDW/control, the opposite of the way this study is being represented.
10/ Final thoughts. I would dispute some of the channels that have been classified as IDW (e.g. Sargon of Akkad), but I don't think that would change the results. Use of moralistic language like “infection” is unnecessary and is evidence of the authors' bias leaking into this work.
11/ It's a shame, because I really like some of these methods. I'm planning to mimic some of them and fix the problems I'm complaining about.
12/ One more thought. The "missing movement" problem in this paper is glaring. Consider this study: "Planes are problematic because they move people to New Zealand". Imagine only looking at flights into NZ and not even mentioning the flights out.

More from @mark_ledwich

Dec 4, 2020
I am one of the authors of this "recently published research". The problem is not with our classifications; it's the way the labels were converted into far-right and far-left.

This paper is actually very good, but is let down by the classifications.

🧵
They classified channels using our old dataset (Ref 23) and ones from the Auditing Radicalization Pathways study (Ref 20).
arxiv.org/pdf/2011.12843…

They translated labels from these studies into their categories in a really strange way.
In particular:
They coded the labels IDW/Anti-SJW -> Far-Right: we consider purely anti-woke channels "right", but calling this group far-right deserves pushback.

Conspiracy -> Far-Right: This group is too broad (e.g. 9/11 truthers, and Area 51 believers) to be far-right.
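(As data, the translation I'm objecting to is just a label map. A toy sketch of the two mappings named above, with my reading alongside; everything else in the paper's table is omitted.)

```python
# The translation criticised above (only the mappings named in this thread;
# the paper's full table is in the image quoted earlier):
their_mapping = {
    "IDW": "Far-Right",
    "Anti-SJW": "Far-Right",
    "Conspiracy": "Far-Right",
}

# My reading of the same labels (an illustration of the objection, not an official re-coding):
my_reading = {
    "IDW": "Right",      # anti-woke, but not far-right
    "Anti-SJW": "Right",
    "Conspiracy": None,  # too broad (9/11 truthers, Area 51) to sit anywhere on a left-right axis
}
```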
Dec 31, 2019
1/ I found 4 articles on TubeFilter about YouTube recommendations. Every 👏 single 👏 one 👏 is reporting research with the same "fatal flaw" - anonymous data.

let me show you 🧵
2/ Pew reports that they used anonymous data.
3/ Two articles rely on AlgoTransparency. The code shows they don't use cookies (or the many things you would need to implement to make it personalized).

(for those that read code)
github.com/pnbt/youtube-e…
Dec 29, 2019
1/ There are two main criticisms of the study that I should have anticipated better.

a) anon recommendations could be very different from personalized ones
b) the NYT and others were reporting on 2018 radicalization, yet we only analyzed late 2019 recommendations
2/ Anonymous recs (even when averaged out over all users) could have a different influence compared to personalized ones. This is a legit limitation, but one that applies to all of the studies on recs so far.
3/ There are practical reasons that make this extremely difficult: you would need a Chrome extension (or equivalent) that captures real recommendations and click-through stats from a representative set of users. I don't plan on doing that.
Dec 28, 2019
1. I worked with Anna Zaitsev (Berkeley postdoc) to study YouTube recommendation radicalization. We painstakingly collected and grouped channels (768) and recommendations (23M) and found that the algo has a deradicalizing influence.

Pre-print:
arxiv.org/abs/1912.11211
🧵
2. It turns out the late 2019 algorithm
*DESTROYS* conspiracy theorists, provocateurs and white identitarians
*Helps* partisans
*Hurts* almost everyone else.

👇 compares an estimate of the recommendations presented (grey) to received (green) for each of the groups:
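(The comparison itself is a simple ratio. A minimal sketch, assuming a hypothetical per-group CSV with `presented` and `received` columns; the file and column names are placeholders, not our published data.)

```python
import pandas as pd

# Hypothetical input: one row per channel group.
# "presented" = recommendation impressions pointing at the group,
# "received"  = estimated views the group gains from those recommendations.
groups = pd.read_csv("group_recommendation_estimates.csv")

# advantage < 1: the group gets less traffic than its share of impressions ("hurt")
# advantage > 1: the group gains more traffic than its share of impressions ("helped")
groups["advantage"] = groups["received"] / groups["presented"]
print(groups.sort_values("advantage"))
```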
3. Check out recfluence.net to have a play with this new dataset. We also include categorization from @manoelribeiro et al. and other studies so you can see some alternative groupings.

All of the code and data is free to review and use: github.com/markledwich2/Y…
Jun 30, 2019
The context of Caroline's tweet is an incident with Antifa and @MrAndyNgo today.

This is a video of it

And the photo is the aftermath.

Either Caroline no longer extends humanity to him, or she is not aware of what happened.
@MrAndyNgo Nathan knows what happened, but doesn't think Andy can be a victim of serious assault because he is "relentlessly baiting and harassing antifa".

@MrAndyNgo And this reporter calls it snowflakery. He isn't worried about being on the receiving end because his team can commit violence that is not taken seriously by police in Portland.
