In platform content regulation, operational logistics are everything. That’s cropping up, inevitably, as the EU copyright filter wars heat up again this fall. 1/
You may recall that Copyright Directive Article 17 “resolved” the tension between rightsholders’ and users’ rights by simply mandating the impossible: platforms must “prevent” infringing uploads while simultaneously “in no way affect[ing]” legitimate uses. 2/
No filter in the world can actually achieve both those things. Even filter vendors told the Commission that. infojustice.org/archives/41930 3/
And PS, filters + human review won't do a good job either. Content moderators, even with great training, won't know the licensing history or the nuances of every national law. And requiring lots and lots of human review = entrenching the incumbents who can afford it. 4/
EU lawmakers passed the buck to the Commission to issue Guidelines clarifying Article 17’s impossible “block infringement and don’t block lawful use” task. That’s what the fight is about now. 5/
Since filters CAN’T perfectly enforce the law, the dispute has inevitably moved to “who wins by default when filters fail?” The operational logistics of real-world content moderation are entering the policy discussion about three years too late. 6/
If it’s not clear whether the user is lawfully sharing expression and information, but a filter catches their post, does the post (a) come down or (b) stay up? Are users guilty until proven innocent, or innocent until proven guilty? 7/
Defaults are everything. 8/
DEFAULTS ARE EVERYTHING 9/
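To make the default concrete, here's a minimal toy sketch of the decision logic. This is mine, not any platform's real pipeline, and every name and threshold in it is an assumption; the only point is that a single hardcoded default resolves every case the filter can't.

```python
# Toy sketch, not any platform's actual pipeline: every name and
# threshold here is an illustrative assumption. The point is that one
# hardcoded default decides the contested cases.

REMOVE_ON_UNCERTAIN_MATCH = True   # rightsholders' preferred default (14/)
# REMOVE_ON_UNCERTAIN_MATCH = False  # user rights groups' preferred default (15/)

CONFIDENT_MATCH = 0.95  # hypothetical confidence cutoff for the filter

def moderate(match_score: float) -> str:
    """Decide an upload's fate from a copyright filter's match score."""
    if match_score >= CONFIDENT_MATCH:
        return "removed"  # clear match: comes down under either default
    if match_score > 0.0:
        # The contested zone: the filter fired, but it can't distinguish
        # quotation, parody, or licensed use from infringement.
        if REMOVE_ON_UNCERTAIN_MATCH:
            return "removed, pending user appeal"
        return "kept up, pending rightsholder complaint"
    return "kept up"  # no match at all

print(moderate(0.5))  # an uncertain match: the default decides
```

Given how rarely users actually appeal (see the counternotice numbers in 11/ below), whatever the uncertain branch returns is, in practice, the final answer for most posts.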
There is a pretense that the default answer will be temporary, because later on users or rightsholders will appeal and we’ll all arrive at the right answer. That’s a fantasy. 10/
Research suggests users don't really use counternotice. (Note the DMCA has special barriers, so things might be slightly better in Europe, but these numbers are *really* not encouraging.) 11/ cyberlaw.stanford.edu/blog/2017/10/c…
Anecdata suggests that the users who do appeal are often not the ones with legitimate claims. (Ask people who work in Trust and Safety.) 12/
Human rights advocates say appeals for *uploaders* can't address the right to receive information for listeners, like orgs that gather evidence of human rights abuses on the Internet. 13/ blog.witness.org/2019/01/witnes…
Anyhow, here are the latest sallies in the fight over who wins by default when the filters fail.
Rightsholders say: if it *might* be infringing, take it down and let the users appeal. 14/ politico.eu/wp-content/upl…
User rights groups say no: if it *might* be infringing, that’s not reason enough to take it down. 15/ politico.eu/wp-content/upl…
Pretending that automation could solve these problems got us into this mess. EU lawmakers considering filter mandates in the Terrorist Content Regulation trilogues, please take note! 16/16

