I haven’t spoken much about KF over the years, mostly out of sheer horror and a feeling of hopelessness after their hosting of the manifesto and video of the Christchurch massacre.
I would talk with reporters once in a while about possible angles to cover KF, but it was futile.
The problem, of course, was anyone who spoke of KF was likely going to endure networked harassment, possibly for years on end.
KF isn’t just a website that coordinates harassment. It’s an infrastructure of evil frequented by people with a serious amount of technical skill.
Mess with KF and you were likely to have your bank account, Uber, phone number, and apps hacked. If they got your home address, you’d be flooded with pizza deliveries at best, or at worst, swatted by police.
Any business that kept that site alive was complicit.
Now, everyone knows that everyone knows that you do not underestimate an organized crew of trans activists.
While there have been numerous attempts to get KF kicked offline, one critical networked faction was often missing. Bad press and govt pressure weren’t enough.
Enter @keffals and the #DropKiwifarms campaign, which got Cloudflare and other companies to stop providing services to KF. Popular, organized, and already targeted by KF, Keffals is a Twitch streamer and trans activist who was recently swatted by KF.
Networked harassment campaigns that jump from the wires to the weeds are different than generalized hate campaigns because an individual is harmed and it’s a crime. There’s no way to hide behind claims of free speech, which was the main defense of the corps still serving KF.
Moreover, @keffals had a huge audience practiced at swarming to spread specific messages. This command structure of social media can be used for pro-social or anti-social activism, but it only works when someone has a large audience that respects their calls to action.
Mobilization is one of the most crucial features of contemporary social media and is a consequence of its design. It was disingenuous of Cloudflare to claim that the pressure campaign exerted by #DropKiwifarms didn’t factor into their decision to drop KF.
There would be no public knowledge of KF if not for the ‘name and shame’ campaign organized by #DropKiwifarms.
So what’s next?
Typically, deplatforming doesn’t stick unless it’s coupled with demonetization. That requires more vigilance on the part of activists to keep an eye on any attempts to raise $ to bring KF back online.
The removal of KF from the far right media ecosystem is already producing new alliances among influencers. The far-right, like most social movements, is characterized by fractures and fissures across trolls, white supremacists, and misogynists. The owner of KF is reviled by many.
The biggest fear of the far right in this moment is less about defending KF and more about the writing on the wall; i.e. other sites like 4chan will be targeted next.
They fear this activism will obliterate the internet’s hate machine.
So be it.
• • •
I am unsure if all things will remain equal on this platform though. If someone does build a stable alternative to Twitter, then it’s possible there will be a significant migration. That’s a HUGE if, but it could happen if personal account data were portable and interoperable.
My other thought is that Twitter, fwiw, will further become a terrain of culture wars. With a mandate for “free speech,” the kind of harassment that content moderation tempers will go into overdrive, targeting LGBTQ groups, women, BIPOC, and anyone else fighting for civil rights.
1. Experts get to decide. (Good explanation, but most experts, or at least the really sharp ones, will hedge and say they still need to do more research, so it’s not always so cut and dried. Moreover, experts have a difficult time communicating nuance.)
2. The loudest get to decide. (This is true in the social media age. Those with the biggest audiences who share widely can make things “feel true” through repetition and redundancy. Loudness is facilitated by the design of social media that circumvents traditional gatekeepers.)
We read, write, analyze, and study information flows, not because the technology does something particular, but because people using technology transform society.
Our approach to researching misinformation, disinformation, and media manipulation begins by blending the fields of communication, sociology, and cybersecurity with science and technology studies. Then, we scope out cases to turn incidents into intelligence.
The case study of the “laptop from hell” discussed in this article is in draft form. We used it in our course to spark discussion about how different news outlets, platform companies, states, and political operatives dismissed, distributed, or deflected specific details.
The Facebook whistleblower was on the civic integrity team, which was recently dissolved.
What’s weird is that it sure does appear that Facebook employs some folks with a lot of integrity, but the company just doesn’t seem to hear them when they speak up.
Of course, there must be some element of covering one’s own behavior… but this looks like it implicates so much more than just one team.
Worth disclosing that she’s involved in this, which looks like a consultancy? Maybe a nonprofit? Hard to say… quaranteam.com/who-we-are
The US-centric criticism of disinformation research is warranted, but neglects the international nature of the field.
One only needs to look at the journalism/research from the Philippines, Malaysia, and Taiwan to see how important the field’s corpus of terminology and methods is.
Of course, US politicians and the power elite turn everything they can to their advantage, not just the concept of disinformation.
When we talk about politics we are really talking about media about politics. Disinformation research is media studies.
When it comes to assessing the role disinformation plays in society, Joe says, “However well-intentioned these professionals are, they don’t have special access to the fabric of reality.”
I disagree. There’s no such thing as a “fabric of reality” to begin with. Metaphors fail.
The real story behind health misinformation online is 20 years long.
1. Paywalls prevent accurate information from reaching mass audiences.
2. Antivax activists, like other fringe groups, organized on available tech.
3. Antivax activists spent time sowing doubt to recruit.
4. Recruitment is not someone just saying “I’m antivax,” it’s about adopting a new worldview. (Similar to taking the red pill)
5. Whether one by one or as a whole group, the activists used cloaked science and harassment to silence challengers.
6. One of the oldest and most well-worn ways to move a wedge issue is to frame it around children’s well-being. Covid vaccines complicated that reasoning, but we are in it now.