I finally read Øe’s Opinion in the CJEU’s pending case about Article 17 filtering/fundamental rights, and it is amazing. Here comes a long thread about what stood out to me. curia.europa.eu/juris/document…
Of course, I don’t like the upshot: Article 17 stands. Øe reconciles that with users' rights by, as @bjjuette and @giuliapriora put it, confining the law in a “tight corset of conditions to safeguard compliance with EU fundamental rights.” copyrightblog.kluweriplaw.com/2021/07/20/on-…
(That blog post is a great overall explainer of the issue and the Opinion, BTW, with lots of useful links.)
To reach his conclusion, Øe goes on a remarkable tour of fundamental principles in intermediary liability. He also sets aside a lot of nonsense with an efficiency and brusqueness that will bring joy to realists everywhere. Here's a rough list of highlights.
(1) Yes, for purposes of fundamental rights analysis we need to assume Art 17 is about filtering, not some other hypothetical "innovative solution."
(2) Filtering is functionally a *requirement* even if formally it is called a condition for immunity. Par. 61-62, 64, 69, 85
(3) Governments can't avoid responsibility for bad platform laws by blaming the platforms.
"[T]he ‘interference’ with the freedom of expression of users is indeed attributable to the EU legislature. It has instigated that interference." Par. 84
This is so clear and powerful: "The legislature cannot delegate such a task and at the same time shift all liability to those providers for the resulting interferences with the fundamental rights of users." Par. 84
(4) The real-world effects of filters (or any other technology mandate) matter. "[I]n order to assess the compatibility of Article 17 of Directive 2019/790 with Article 11 of the Charter, account must be taken not only of its wording, but also of its actual effects." Par. 86
(5) THIS IS HUGE: The prohibition on "general monitoring" in ECD Art. 15 isn't just legislative. It is "a general principle of law governing the Internet, in that it gives practical effect, in the digital environment, to the fundamental freedom of communication." Par. 106
So much flows from this linkage of fundamental rights and filtering mandates. The AG Opinion is rich in analysis of free expression and information rights, filters, and intermediary liability generally.
(5.1) "Filtering entails a significant risk to freedom of expression, namely the risk of ‘over-blocking’ lawful content." Par. 141
(5.2) Corrective procedures, including appeals and reinstatement, "are not sufficient on their own to ensure a ‘fair balance’ between copyright and users’ freedom of expression" after filters remove the wrong content. Par. 180
"The collateral effect of such filtering is too great to be compatible with that freedom, irrespective of whether injured users have a right of appeal against the blocking of their information." Par. 186
One reason that appeals are not enough to make up for a broken content takedown system is, per the AG, that it "systematically impos[es] the burden of inaction on users," who often will be afraid or insufficiently motivated to appeal. Par. 187
Relying on appeals to correct filters' errors is also inadequate, he continues, because so much online speech is time-sensitive. "Delaying the posting of such content by its systematic blocking ex ante would risk rendering it irrelevant." Par. 188
(5.3) To protect users' rights, filters can be required only for content whose illegality is "manifest, without... the need for contextualisation" or that has been adjudicated by a court. Par. 198
This, too, is a really big deal for Art. 17.
(5.4) Only copies that are identical or equivalent to a degree that "will not require... service providers to make an ‘independent assessment’ of their lawfulness" can be the subject of filtering mandates. Par. 202, 205
This is also a big deal, and it closely tracks Glawischnig-Piesczek.
The AG gets pretty deep in the weeds on things like "parameters in content recognition tools which help distinguish between what seems manifest and what is ambiguous." Check out Par. 211 if that's your thing.
(5.5) If available filtering technology can't avoid excessive false positives, then filtering is not required under Art. 17(7). Par. 214, 67-68.
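To make that concrete, here is a purely hypothetical sketch, in Python, of the decision rule that points 5.3 through 5.5 seem to add up to. Nothing in it comes from the Opinion or any real filter; the function, the threshold, and the context flag are all my own illustrative assumptions.

```python
# Hypothetical illustration only -- not the AG's design or any real filter.
# A toy decision rule for an upload filter trying to honor Par. 197-198,
# 202, 205, and 214. Every name and number here is an assumption.

MANIFEST_MATCH = 0.98  # assumed cutoff: "identical or equivalent" copies only


def filter_decision(similarity: float, needs_context: bool) -> str:
    """Return what a rights-respecting upload filter may do.

    similarity:    match score against a reference file, 0.0-1.0.
    needs_context: True if lawfulness turns on context (parody, quotation...).
    """
    if needs_context:
        # Contextual judgments belong to humans and courts, not filters
        # (Par. 197-198), so the upload stays up pending ex post review.
        return "publish; leave to notice-and-takedown"
    if similarity >= MANIFEST_MATCH:
        # An identical or equivalent copy is manifestly infringing, so
        # blocking needs no "independent assessment" (Par. 202, 205).
        return "block"
    # Anything short of manifest stays up: over-blocking is the harm the
    # AG treats as worse than under-blocking (Par. 207, 214).
    return "publish"
```

The asymmetry is the point: every uncertain branch resolves to publishing, because (per Par. 207, quoted below) the legislator treated false positives as more serious than false negatives.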
(5.6) Laws should not require platforms to make hard legal judgments about user speech. They lack "the necessary expertise and, above all, the necessary independence... particularly when they face the threat of heavy liability." Par. 197
That matters for *all* notice and takedown.
(6) The conclusion that filtering mandates are inextricable from fundamental rights also has consequences for state institutions.
EU legislators "cannot... leave to the Member States [or] the service providers... the task of establishing such safeguards." Par. 151
And Member States can't just skip implementing the Directive's key protections for user rights. AHEM, France. Par. 162.
That's the really meaty part. But here are some more tidbits that caught my eye.
(a) The legislator "considered that ‘false positives’, consisting of blocking legal content, were more serious than ‘false negatives’, which would mean letting some illegal content through." Par. 207
(b) Regarding that weird last-minute earmark giving copyright-holders additional rights in the Commission Guidance: "I cannot agree with this, unless I alter all the considerations set out in this Opinion." Par. 223
(c) The AG is refreshingly frank about how the court's statutory interpretation of "general monitoring" has changed over time. He thinks the change is OK; I don't so much. Par. 111-113
(d) An odd little must-carry requirement sneaks in at Par. 163. Platforms cannot *choose* to disregard protections for parody, etc. Nor can they contract with copyright-holders to do so! (This is a thing that has, in fact, happened.) They can still remove content on non-copyright grounds.
OK, that's my tour of the AG Opinion on Art. 17. I hope this (especially the citations) is helpful to those who are in a position to take the many great, great nuggets of wisdom in this Opinion and put them to good use.
Fin.
Oh wait, wait, one more thing on must-carry from the AG Opinion. At one point, instead of saying the state can't mandate over-filtering, he says platforms "are not authorised" to over-filter. Odd and interesting... (Par. 205)
