Back in 2018, @FrankPasquale published "Tech Platforms and the Knowledge Problem," in which he proposed a taxonomy of tech reformers: some of us are "Jeffersonians" and others are "Hamiltonians" (in 2018, this was a *very* zeitgeisty taxonomy!).
* Hamiltonian: "improving the regulation of leading firms rather than breaking them up"
* Jeffersonian: "The very concentration (of power, patents, and profits) in megafirms" is itself a problem, making them both unaccountable and dangerous. 2/
In a new article for @EFF, I make the case for Jeffersonian theories of content moderation, or, as the title has it: "To Make Social Media Work Better, Make It Fail Better."
Let me start by saying that Big Tech platforms suck at moderation and do a lot of things wrong. We helped develop the #SantaClaraPrinciples, which lay out concrete steps that platforms could and should take to improve their moderation:
But even if they do all that, they'll still suck, because they've set themselves an impossible task. Facebook says it can moderate conversations in 1,000 languages and 100+ countries. That's an offensively stupid claim to make. 5/
Communities are partly defined by their speech norms. Some words are considered slurs by some communities and not by others - and some communities may only consider a word a slur if it's used by outsiders, but not members of the group. 6/
That means that moderators - possibly relying on machine translations from a language they don't speak - have to figure out not just whether a word is acceptable or not, but also whether the speaker is a bona fide member of the community in its eyes. 7/
This is how you get the familiar parade of moderation horrors, which @mmasnick documents thoroughly on @Techdirt:
* Black users' discussions of anti-Black racism are removed for being anti-Black racism:
So why do so many of us feel like we have to stay on these platforms whose judgment we don't trust? Two words: "switching costs." That's an economics term for everything you give up when you quit a product or service, like a social media platform.
Tech has historically benefited from low switching costs. The flexibility of digital makes it much easier to, say, plug a new carrier into the phone system or open a Microsoft Word file with Apple's Pages than it is to get a KitchenAid mixer to accept a Cuisinart attachment. 13/
But while the platforms struggle to create *technological* barriers to #interoperability, they've created many *legal* barriers: patent, copyright, anti-circumvention, cybersecurity and more. 14/
These are the barriers that prevent co-ops, nonprofits, startups and others from creating products that blast holes in Facebook's walled garden - services that let you leave Facebook without severing contact with the friends, communities and customers who stay behind. 15/
High switching costs have kept the #fediverse from taking off: people hate Facebook, but they love their communities, and so long as the latter is greater than the former, they won't jump ship. 16/
Interoperability would end that balancing act: you could leave Facebook and stay connected, eating your cake and having it, too.
So while there's room for improvement in how Big Tech moderates, that's just part of the story. 17/
The advantage of giving people the technological self-determination to move to communities whose norms of what is allowed and what is banned suit them is that it makes the inevitable Big Tech mistakes *less important*. 18/
ETA - If you'd like an unrolled version of this thread to read or share, here's a link to it on pluralistic.net, my surveillance-free, ad-free, tracker-free blog:
In fall 2020, Facebook went to war against Ad Observatory, a NYU crowdsourcing project where users capture paid political ads through a browser plugin that sanitizes them of personal info and uploads them so disinformation researchers can analyze them.
Facebook's attacks were truly shameless. They told easily disproved lies (for example, claiming that the plugin gathered sensitive personal data, despite publicly available, audited source-code that proved this was absolute bullshit). 2/
Why was Facebook so desperate to prevent a watchdog from auditing its political ads? Well, the company had promised to curb the rampant paid political disinformation on its platform as part of a settlement with regulators. 3/