OK here's my quick and dirty gloss on the Peterson v. YouTube case from the CJEU today. 1/
curia.europa.eu/juris/document…
I see this case as having three big questions:
(1) When can a platform be liable for (c) infringement under substantive (c) law?
(2) When is a platform immunized from such infringement by the eCommerce safe harbors?
(3) What injunctions can issue even if the platform is not liable? 2/
Question (1) -- when can a platform be liable for (c) infringement under substantive (c) law -- turns on evolving case law about communication to the public.
Bottom line: There are a lot of ways to wind up liable here, so the safe harbor issue in Question (2) matters a ton.
3/
I suspect there is some retconning here as the court explains the meaning of the GS Media holding as regards platform liability for communication to the public... But I'm not close enough to the case law to be sure. 4/
As an aside, I sure don't miss working on copyright all the time. The recitation of facts in this case is giving me bad flashbacks to 2008. 5/
OK, question (2) -- about hosting service immunity under eCommerce Directive Art. 14 -- is the meat of the opinion as far as I'm concerned. It's pretty good, and usefully clarifies a bunch of things. 6/
To create knowledge and remove immunity, a notice must contain enough info to show infringement even without "detailed legal examination" by the platform, and **establish that removal is compatible with freedom of expression.** 7/
Also: Platforms don't LOSE immunity just because they try to prevent infringement using tools like YouTube's ContentID.
That would be absurd, right?
Well...not according to a lot of plaintiffs and some courts. Glad the CJEU speaks to this clearly. 8/
But... here's the squishiness. Platforms can lose immunity if they contribute "beyond merely making the platform available." So now lower courts get to decide which of a host's activities constitute "the platform" and what is some feature or activity "beyond" the platform. 9/
If this "what is the platform and what is a feature bolted on to the platform" question sounds familiar over in this corner of Twitter, it's bc we've been talking about it all week regarding the new U.S. competition bills. @marklemley @blakereid 10/
On to Question (3) -- when can courts issue injunctions to non-liable platforms, and how much monitoring can those injunctions require? This is where IMO things get ugly, though the outcome is not surprising. 11/
The CJEU is looking at Germany's Störerhaftung or "interferer" doctrine, which lets courts issue injunctions to require platforms to *prevent* infringement, even if the platform is not liable on the merits of the (c) claim (our Q1) or is immunized under Art. 14 (our Q2). 12/
Injunctions requiring platforms to prevent infringement (by monitoring) are OK as long as the platform was first notified about the specific work, says the CJEU. 13/
This continues the idea, also used in the Glawischnig-Piesczek case, that monitoring is not "general" if it only involves looking for particular, specified infringements. (Or, in that case, acts of defamation.) 14/
My analysis of that case is here. I had real issues with it. This case is much better in some respects. In particular, it lists all THREE sets of rights that must be considered: the claimant's, the platform's, and ALSO users' rights, including expression/info.
academic.oup.com/grurint/articl… 15/
And, like Glawischnig-Piesczek, this clearly refers only to court-issued injunctions. It is not, like the new Art. 17, open season for rightsholders to get monitoring without judicial review. 16/
But as a matter of statutory/directive interpretation, it turns on what I see as a fiction: that if platforms are filtering only for specified works, then they are not actively monitoring "all the content uploaded." REALLY? How else could this possibly work? 17/
Happily for my peace of mind, I am resigned to that statutory fudge. And given Glawischnig-Piesczek and other cases, this outcome is not surprising. I just hope Member State courts get serious about the fundamental rights analysis. 18/
The fundamental rights analysis should require asking how the ACTUAL filter being demanded will work in real life, what unintended consequences it will have, and how that affects users' rights to privacy, freedom from discrimination, right to remedy, and expression/info rights. 19/
That's my take, gotta go feed kids breakfast now, thanks for listening. 20/20
Not sure why this threading isn’t working, but here are the remaining six tweets:
An important point raised by @javierpallero: YouTube recommendations happen to be part of the facts of this case, and the court, without analysis, treated them as part of the platform, so they're immunized.
For now. I guess lower courts can find new relevant facts…
