The question I get most on the subject of de-platforming racist/fascist/white supremacist types is "If we kick them off mainstream sites, won't they just go to other places and [fill in bad stuff here]?" In this thread I'm going to explain why this is a poor rationale.
First, we need to remember that de-platforming directly undercuts the Bad Guys' ability to achieve their 3 main goals when using social media: Propaganda, Organization, and Trolling/Harassment. Specifically...
De-platforming disrupts propaganda & recruitment. They need to be on big mainstream social media platforms to normalize their ideas & to spread them. By removing Bad Guys from mainstream social media, it is harder for them to appear normal & for normal people to be recruited.
De-platforming disrupts planning & organization. Platforms that have both a propaganda side (public posts) and an organization side (private chats) allow a seamless pivot between the two, and this is especially dangerous when the private chats are encrypted (e.g., Telegram).
Examples: Facebook Pages = propaganda, Groups/Messenger = organization. Twitter timeline = propaganda, DM = organization. Telegram channels = propaganda, Chats/Messages = organization. You get the idea.
De-platforming from mainstream sites forces the group to use multiple, unfamiliar platforms AND continues to chip away at the facade of normalcy.
De-platforming disrupts harassment/trolling. A major "fun" activity of Bad Guys online is harassing other users, either "normies" or people in some ethnic/religious/etc group they don't like. Removing them from mainstream social media reduces this toxic behavior for everyone.
An argument I hear a lot is "But if they go to other places won't they just be harder to track?" This usually comes from folks who don't actually KNOW how to track these groups on those other platforms, so they might be worried that the job will get harder for folks like me.
But that's a poor rationale. Yes, it might make my job harder initially because I'd have to learn a new platform, but I actually love this challenge. It's FUN for me. Also, there are a few things that almost always offset this cost...
(a) Some "Alt" platforms are often EASIER to systematically collect data from (e.g., compare Telegram's open API to Facebook's, which offers researchers almost nothing), and (b) the Bad Guys are not as adept at using Alt platforms either, so they make a lot of mistakes.
(However, I'm also very much watching the slow creation/adoption of uncensorable platforms, distributed web, cryptocurrency, etc. These will necessarily change the disruption discussion in the future away from de-platforming and towards other strategies. But we're not there yet.)
Another poor argument I've heard against de-platforming is the "petri dish" effect: if you kick them off mainstream social media they'll end up going into a petri dish/echo chamber where they will be radicalized faster. But...
...this reasoning discounts the fact that by the time someone gets de-platformed, they're ALREADY engaging in toxic behaviors - they're just doing it with the blessing of a normie platform and hassling the rest of the users there as well.
Anyhow, I hope this thread helps folks understand some of the variables we can use when thinking about de-platforming as a strategy for online safety.
The Epik breach has some security mistakes in it that are so damaging they take my breath away. But I guess this shouldn’t be surprising given Rob Monster’s approach to business. A thread.
Here’s the Epik CEO trying to hire Jack Corbin, aka Daniel McMahon, serial harasser, cyber stalker now sitting in jail.
Here’s him dragging Cloudflare and calling them a honeypot and saying they have bad security. Can’t make this stuff up folks.
It’s time to talk about what happens after de-platforming: Nick Fuentes lasted 5 days after being kicked off Twitter before creating a new account. It has a name mocking sexual assault, groyper avatar, advertises his show, and follows all his other briefly deplatformed orbiters.
De-platformed high status individuals lose their brand and follower count (1k vs 130k in his case) when they come back, but they almost always return shortly after the ban. (They are easily detectable, so why do the platforms let them back?)
Low status individuals can return AND retain their brand - this is when you see username1, username2, ad nauseam. They have to rebuild their followers, but that is straightforward since they had fewer than 1k followers to begin with.
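The "username1, username2" pattern above is trivially machine-detectable, which is part of why it's puzzling that platforms let these accounts return. A minimal sketch of the idea (all handles and lists here are invented for illustration):

```python
import re

# Hypothetical set of previously banned base handles (illustrative only;
# a real system would maintain this from enforcement records).
BANNED_BASES = {"exampletroll", "fakepundit"}

def base_handle(username: str) -> str:
    """Normalize a handle by lowercasing and stripping a trailing
    numeric suffix, e.g. 'ExampleTroll3' -> 'exampletroll'."""
    return re.sub(r"\d+$", "", username.lower())

def looks_like_return(username: str) -> bool:
    """Flag new accounts whose normalized handle matches a banned base."""
    return base_handle(username) in BANNED_BASES

print(looks_like_return("ExampleTroll3"))   # True
print(looks_like_return("UnrelatedUser"))   # False
```

Real re-registration detection would also use avatar similarity, follower overlap, and bio text, but even this naive suffix check catches the serial-rename behavior described above.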
Finally - here's my accepted paper on the DLive video streaming service and how it's being used by far-right propagandists to earn money (Apr 2020-Jan 2021). Lots of data! Here are some of the largest cash-outs including some post-insurrection refunds arxiv.org/abs/2105.05929
Here is the network of streamers (pink) and donors (gray) - only showing donors who gave at least 10,000 lemons ($120) for visibility, otherwise too much data! You can read more about the different labels in the paper, but A is Groypers, B is MurderTheMedia, etc.
What we learn from the network diagram is that there are lots of separate communities with very little overlap - at least at the high dollar amounts - between them. The biggest donation recipients operate more as "stars" within their own separate fandoms or cliques.
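The thresholding used in the diagram (only donors at or above 10,000 lemons, i.e. $120) is easy to reproduce on any donor-streamer edge list. A sketch with an invented toy dataset, using the paper's stated conversion rate of 10,000 lemons = $120:

```python
# Toy donation edge list: (donor, streamer, lemons). All names invented.
donations = [
    ("donor_a", "streamer_1", 25_000),
    ("donor_b", "streamer_1", 4_000),   # below threshold, dropped
    ("donor_c", "streamer_2", 12_000),
]

LEMONS_PER_DOLLAR = 10_000 / 120  # 10,000 lemons = $120 per the paper

def big_edges(edges, min_lemons=10_000):
    """Keep only donor->streamer edges at or above the lemon threshold,
    converting each amount to dollars for readability."""
    return [
        (donor, streamer, round(lemons / LEMONS_PER_DOLLAR, 2))
        for donor, streamer, lemons in edges
        if lemons >= min_lemons
    ]

print(big_edges(donations))
# [('donor_a', 'streamer_1', 300.0), ('donor_c', 'streamer_2', 144.0)]
```

The same filtered edge list is what you'd feed into a graph layout tool to get the streamer/donor clusters shown in the figure; low-dollar edges are what would otherwise blur the separate fandoms together.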
LBRY is one of a new crop of "uncensorable" blockchain-based content hosting sites, unsurprisingly run by a libertarian tech bro. They recently launched a "Youtube killer" called Odysee & set about trying to attract fascists. Yet, they are having some struggles. Let's explore.
First, and most embarrassingly, Odysee is trying to act like they rolled their own livestreaming tech solution. Fact is, they seem to have just hired the programmer for Bitwave (goes by Dispatch/Xander) & are running all their livestreams through the Bitwave[.]tv infrastructure.
You may recall Bitwave as one of the sites that enabled GypsyCrusader (Paul Miller) to harass Omegle users w/ shocking, racist comments while dressed like The Joker - before his gun charges and arrest. Amazing partnership LBRY, wow.
One thing I do to wind down after a long day is open your public source code and check out what your programmer is up to. I de-obfuscate the code, run a few tests, then contact every company involved in the tech stack keeping you online. It's up to them whether to let you persist.
I document everything, send updates to people who are keeping up with all this, and thank everyone profusely who helps keep the internet safe from violence-inspiring, racist antisemites like you. Then I go to bed and do it again the next day.
Sometimes while I’m figuring out the tech you’re experimenting with that day, I see the digital traces of someone else doing the same thing as me and I imagine we kind of wave at each other. Shoutouts to all the people quietly doing the work.
I was just answering a survey with a question about how "Extremist actors use the internet and social media differently than the average user." Here are 6 ways I have observed far-right extremist actors behaving differently online:
1. Harassment of users. Harassment is a key entertainment activity for these people, and can be carried out both on a single platform AND between platforms (i.e. advertising on Telegram for a DLive channel that is livestreaming a Discord raid).
2. Development of specialized vocabulary and memes to spread hate and build camaraderie. Specialized vocab can also be used to skirt content moderation ("joggers", "big luau", etc.). See also the Daily Stormer style guide for more examples of how to propagandize via word choice/tone.