As Parler disappears from the Android and iOS app stores and faces being kicked off Amazon's (and other) clouds, people who worry about monopolized corporate control over speech are divided over What It Means.
1/
There's an obvious, trivial point to be made here: Twitter, Apple and Google are private companies. When they remove speech on the basis of its content, it's censorship, but it's not GOVERNMENT censorship. It doesn't violate the First Amendment.
2/
And yes, of course it's censorship. They have made a decision about the type and quality of speech they'll permit, and they enforce that decision using the economic, legal and technical tools at their disposal.
3/
If I invited you to my house for dinner and said, "Just so you know, no one is allowed to talk about racism at the table," it would be censorship. If I said "no one is allowed to say racist things at the table," it would also be censorship.
4/
I censor my daughter when I tell her not to swear. I censor other Twitter users when I hide their replies to my posts. I censor commenters on my blog when I delete their replies.
Dress it up as "content removal" or "moderation" if you'd like, but it's obviously censorship.
5/
That's fine. Different social spaces have different rules and norms. I disagree with some censorship and support other censorship. Some speech is illegal (nonconsensual pornography, specific incitements to violence, child sex abuse material) and the government censors it.
6/
Other speech is distasteful or hateful (slurs, insults) and the proprietors of different speech forums censor it. This legal-but-distasteful speech is a mushy, amorphous category.
7/
I'm totally OK with hilarious dunks on the insurrectionists who stormed the capitol. Tell jokes about Holocaust victims and I'll throw you out of my house or block you.
And when I do, you can go to your house and tell Holocaust jokes.
8/
I'm not gonna lie. I don't like the idea of anyone telling Holocaust jokes anywhere. Or rape jokes. Or racist jokes. But I have made my peace with the fact that there are private spaces where that will happen.
9/
I condemn those spaces and their proprietors, but I don't want them to be outlawed.
Which brings me back to Parler. It's true that no one violates the First Amendment (let alone CDA 230) (get serious) when Parler is removed from app stores or kicked off a cloud.
10/
But we have a duopoly of mobile platforms, an oligopoly of cloud providers, a small conspiracy of payment processors. Their choices about who may speak are hugely consequential, and concerted effort by all of them could make some points of view effectively vanish.
11/
This market concentration didn't occur in a vacuum. These vital sectors of the digital economy became as concentrated as they are due to four decades of shameful, bipartisan neglect of antitrust law.
12/
And while failing to enforce antitrust law doesn't violate the First Amendment, it can still lead to government sanctioned incursions on speech.
The remedy for this isn't forcing the platforms to carry objectionable speech.
13/
The remedy is enforcing antitrust so that the censorship policies of two app stores don't carry the force of law; and it's ending the laws (copyright, cybersecurity, etc) that allow these companies to control who can install what on their devices.
I got into a good discussion of this on a private mailing list this morning, then adapted my posts and published them in the public "State of the World 2021" discussion on @TheWELL.
14/
There are three posts: the first deals with Apple and Google's insistence that they removed Parler because it lacked an effective hate-speech filter. Given that there is no such thing as an effective hate-speech filter, this is obvious bullshit.
15/
16/
The second addresses the fundamental problems of moderation at scale, where you are entrusting a large number of employees to enforce policies against "hate speech."
17/
The biggest problem here is that "almost-hate-speech" is emotionally equivalent to "hate speech" for the people it's directed at. If tech companies specify hate speech, trolls will deploy almost-hate-speech (and goad their targets into crossing the line, then narc them out).
18/
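To see why "specify hate speech and trolls will route around it" isn't just rhetoric, here's a toy sketch (my own illustration, not any platform's real system, with placeholder terms standing in for actual slurs) of how a naive keyword blocklist catches exact matches but misses trivial variants and dog-whistles:

```python
# Toy keyword filter: placeholder terms stand in for real slurs.
BLOCKLIST = {"slur1", "slur2"}

def naive_filter(post: str) -> bool:
    """Return True if the post contains a blocklisted word."""
    words = post.lower().split()
    return any(w in BLOCKLIST for w in words)

# Exact matches are caught...
assert naive_filter("you are a slur1") is True
# ...but a one-character misspelling slips through,
assert naive_filter("you are a s1ur1") is False
# so does trailing punctuation (no word-boundary handling),
assert naive_filter("you are a slur1!") is False
# and dog-whistles never use a listed term at all.
assert naive_filter("you people know what you are") is False
```

Every patch to the filter (regexes, fuzzy matching) just moves the line that trolls goad their targets into crossing; the "almost-hate-speech" problem is adversarial, not technical.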
And if tech companies tell moderators to nuke bad speech without defining it, the mods will make stupid, terrible mistakes and users will be thrown into the meat-grinder of the stupid, terrible banhammer appeals process.
19/
The final post asks: what should Apple and Google do about Parler?
20/
They should remove it, and tell users, "We removed Parler because we think it is a politically odious attempt to foment violence. Our judgment is subjective and may be wielded against others in future. If you don't like our judgment, you shouldn't use our app store."
21/
I'm 100% OK with that: first, because it is honest; and second, because it invites the question, "How do we switch app stores?"
eof/
ETA: Here's an ad-free, surveillance-free blog version of this thread as a permalink:
In 2003's Pattern Recognition, @GreatDismal discusses the role of "apophenia" - finding patterns where none exist - in paranoid thinking. We are a pattern-matching animal, prone to seeing faces in clouds and hearing speech in static.
Apophenia is omnipresent and weird. It's why 5G conspiracy theorists started circulating a guitar-pedal circuit diagram as a leaked 5G cancer-microchip design (the diagram has a segment labeled "5G frequency").
But this kind of hilarious idiocy doesn't occur in a vacuum. It's got a business model. Companies like Devon's @energydots1 prey on people who've been sucked in by their own apophenic misfirings to sell them "Smartdots" - stickers to protect them from "radiation."
3/
After the 9/11 attacks, airlines and public buildings adopted a flurry of "security" measures, like taking away pen-knives from fliers or requiring visitors to office buildings to be photographed or present a driver's license.
1/
Bruce Schneier's seminal 2003 "Beyond Fear" called these measures #securitytheater.
Schneier pointed out that these measures would be easy to circumvent, and were thus providing only the comforting appearance of security - not security itself.
Security theater is worse than nothing. Security theater gives people the false impression that their risks have been mitigated, when actually things are just as dangerous.
After all, if you know that danger exists, you can take some steps to mitigate or avoid it.
3/
If you've ever argued with a racist Facebook uncle over Thanksgiving dinner, you probably had the fact that the Democrats supported slavery and the Republicans ended it thrown in your face. It's totally true.
1/
What's also true is that the parties underwent a series of "realignments" where their politics were profoundly transformed.
These realignments are a regular feature of two (and even three) party systems.
The thing is, there are more than two ways to think about politics.
2/
Each of the parties is best thought of as a coalition - often a fragile one. The Democrats were a mix of southern racists ("Dixiecrats") and northeastern trade unionists. The Civil Rights Act turned Dixiecrats into Republicans ("We have lost the south for a generation" -LBJ).
3/
The Night of the Short Fingers saw many of the US's largest tech companies blocking Trump and trumpist platforms like Parler, provoking a storm of punditry about What It All Means for the tech companies to have taken this content moderation step.
1/
The best expert I know on the subject is @jilliancyork, my @EFF colleague. She's published "an ongoing list" of "everything pundits are getting wrong about this current moment in content moderation."