Are there Democrats in Congress who simultaneously (1) want platforms to act against things like electoral and Covid disinformation and (2) support Rep. @davidcicilline's antitrust bill with Sect. 2(a)(3) intact?
I see a serious conflict there.
As I read it, that part of the Cicilline bill opens the door to Infowars, Breitbart, The Daily Stormer et al bringing must-carry claims against platforms, or demanding higher ranking.
Here's what that part of the bill prohibits. The first two are about self-dealing by platforms, which is totally appropriate for antitrust/competition law. The third one opens the floodgates to litigation about speech and content moderation -- and bad outcomes.
I haven't followed these bills closely, so if someone has an explanation that makes this problem go away, I'm all ears. But if the problem is there, it's a big and gnarly one. (It also opens the law to much stronger 1st Am challenges by platforms.)
I feel like I should flag here for those who don't know that until 2015 I was an Associate General Counsel at Google.
This part of the bill is a bomb waiting to go off, all the same.
I see this case as having three big questions: (1) When can a platform be liable for (c) infringement under substantive (c) law? (2) When is platform immunized from such infringement by eCommerce safe harbors? (3) What injunctions can issue even if platform is not liable?
2/
Question (1) -- when can a platform be liable for (c) infringement under substantive (c) law -- turns on evolving case law about communication to the public.
Bottom line: There are a lot of ways to wind up liable here, so the safe harbor issue in Question (2) matters a ton.
3/
It’s 2027. You’re a growing US platform startup, considering international growth. UK’s Online Safety bill has become law, and so has the EU’s DSA. So you know initial compliance costs are steep in both markets, and both have future regulatory risk.
Do you launch in:
If you responded, is your answer based on knowing something about:
(Wish I could cross-reference w answers to first poll...)
OK last one. The EU DSA has serious extraterritorial reach (think GDPR) and fines up to 6% of annual turnover. UK's Online Safety law has even broader territorial reach, and 10% fines.
For the region where you aren't yet in compliance, do you:
I finally hit on the perfect term for the interoperability issues around sharing friends' data: Other People's Privacy (OPP).
So bummed I hadn't thought of that in time for this event.
Per the article, the USPS used every stupid, creepy, irresponsible surveillance tool to do pseudo-police work that should never have been their job. And they sold it to Republicans as a way to keep tabs on BLM protestors, and said something else to Democrats.
The UK Online Harms draft captures contradictions of the platform speech debate in perfect microcosm.
Platforms must take down one legally undefined kind of content ("harmful") while leaving up another ("democratically important").
Have fun with that, guys.
If we could agree on what's "harmful" and what's "democratically important," we would be in a much different place as a society.
But I'm sure Facebook can sort it out.
And if they don't, Ofcom can sort it out and fine them.
It's good to have the inherent contradictions of the last few years' debate forced to the surface like that. Dialectics move fast these days.
Welcome to the future, where the government reaches out and takes user posts down from platforms directly. No more pretense that the platform is considering their request, exercising judgment, or trying to protect users. bbc.com/news/technolog…
Removals like these should be tombstoned with state branding. Anyone trying to access the content should see exactly which govt agency took it down.
(As @alexfeerst and I discussed long ago re compliance with the rapid takedown requirements of the Terrorist Content Regulation.)
Direct state-initiated removal from a marketplace is arguably different from such removal for "pure speech" platforms.
Letting state agencies require speech suppression without prior judicial review would be a prior restraint problem in some constitutional / human rights systems.