Hello and welcome to my live reading of the recent SCOTUS #Section230 cases. Since Gonzalez was rightfully vacated, this thread will focus on the Twitter v. Taamneh opinion. Let's dig in:
I greatly appreciate the Court's framing of recommendations as a tool for online publishers to organize and curate information, no different from the choices an offline newspaper makes as to content it displays on its front page. This is the crux of the 1A issue.
Setting the stage a bit for those who are new. The overarching Q in this case is whether Twitter "aided and abetted" the ISIS terrorists who carried out their attack in Turkey. Here is the framework we're working off of (from Halberstam):
And importantly, the Court notes that Halberstam's offline framework maps imprecisely onto online publishers. The Court will rightfully adjust the framework:
Court notes the consequences of broad interpretation, here. Any bystander at a crime scene could be charged with aiding and abetting the crime itself.
This will be crucial for Twitter. Remember: we're similarly talking about Twitter's inaction (failure to remove).
See also:
And again we see the Court reject a rigid analysis of Halberstam as applied to these facts, suggesting that this was the 9th's fatal error.
Here is the test for aid/abet that the Court will rely on going forward:
KEY POINT: Taamneh argued that JASTA applies when a defendant generally aids/abets terrorism (i.e. it's enough to show that Twitter generally permits some ISIS content on the service).
Twitter argues that the claim must be tied to a specific act of terrorism. The Court agrees.
See also (Taamneh's fatal error was contending that Twitter could be generally liable for any and all bad acts stemming from ISIS....):
um...Prodigy / Compuserve redux?
But note, there are indeed limits:
Okay, so we understand the rules. Let's dig into the application:
Important note: Taamneh never alleged that the specific Turkey attacks were actually coordinated on any of the social media services at issue in the case:
All good points, but just a note that even if Twitter did pre-screen content, #Section230 would still apply as to civil liability for that content:
wow
WIN FOR ALGORITHMIC CURATION ***
***though I will note this graf makes me a little nervous when it comes to the implications -- is the Court suggesting algo curation is fine if done neutrally? This could get wonky as applied to gen AI...
(reminder, I'm live tweeting so y'all are just getting my stream of consciousness atm. Actual analysis to follow later).
bingo:
Right -- again, think of what Twitter stands to lose in aligning with ISIS.
SUPER IMPORTANT
Plaintiffs have been trying to end-run the 1A / 230 by arguing that Internet services owe a duty to their users to act. The Court decisively shuts that argument down.
That's a huge win against these looming frivolous failure-to-warn cases...
wow
Court outlines examples of where services could be in trouble. I'm worried about the liability for conscious selection / promotion. This could get tricky when it comes to promoting terrorist content from human rights orgs for example. For 230, conscious selection doesn't matter.
hello i'm back sorry -- reporter calls......
crucial:
9th really screwed up. As I've said, these cases had no business being in front of SCOTUS.
We have decades of precedent affirming that 230 protects algo curation. And here, SCOTUS unanimously determines that Taamneh failed to even state a claim. Kinda embarrassing for the 9th.
Again -- come on Ninth Cir...
Good point, though I do remain slightly concerned about the implications for future tort-based claims against Internet services. Plaintiffs are really going to home in on the neutrality args (imo).
If we must go to discovery just to figure out how attenuated the relationship is between the service and the tort, then 230 is pretty much moot.
So, this is all to say that the lower courts need to remember that none of this is new. Nothing changes wrt intermediary liability.
TY for following my live read!
To sum:
-- this was a pretty clean win overall for online publishers like G and Twitter;
-- no problematic 230 dicta;
-- some concerns WRT neutrality and common carriage and how that will affect Gen AI and TX/FL cases;
-- victory lap well earned
Today, the Supreme Court announced their opinion in Murthy v. Missouri.
This case illustrates the complexities of online content moderation and offers some interesting insight into how the Court might rule on the long-awaited NetChoice & CCIA cases. 🧵 supremecourt.gov/opinions/23pdf…
The COVID-19 era was as confusing as it was terrifying. It was an era of extensive mask wearing, wiping down Amazon packages, Zoom funerals, online classrooms, and lots and lots of mis- and disinformation about the disease.
Horse tranqs, bleach injections, you name it.
At the time, much of this mis-/disinformation spread on various online services, Facebook and Twitter included. The sources were a mix of so-called experts, public interest groups, conspiracy theorists, and even our own government.
I’m excited (and sad) to share that I will be leaving @ProgressChamber. I’ve accepted joint research fellowship positions at @santaclaralaw and @AkronLaw, focused on producing AI scholarship.
In other words, I’m officially in my academic era!
Last year, during my annual evaluation, I told @adamkovac that there was only one thing that could entice me to seriously consider leaving Chamber of Progress.
As many of you know, that one thing is an opportunity to achieve my lifelong dream of becoming a TT law professor.
At the time, I hadn't expected this opportunity to present itself anytime soon. In fact, I told Adam "but don't worry, that's like 5-6 years from now."
Turns out, like my Supreme Court predictions, I was only slightly off...
I published an article on California SB 1047, a bill that would effectively prohibit new AI model developers from emerging.
The bill does not apply to existing (derivative) AI models or models built upon existing models. It's the worst I've seen yet. 🧵 medium.com/chamber-of-pro…
If you're going to talk about me, why not @ me? Are you afraid of my response?
At no point did I say my tweets are representative of my employer. And you know that -- as you said, I'm tweeting on a Sunday afternoon, outside of working hours.
[the following is my own opinion, not my employer's].
Last night, @ CreatureDesigns (Mike Corriero) posted an image of @brianlfrye, a Jewish law professor, depicted as Hitler + an image implying Brian's pending execution.
Pure violence and hatred.
Prior to that post, @ CreatureDesigns was engaged in a "discussion" with Brian and me about fair use and AI. Brian and I are notoriously pro-AI innovation and pro free expression (which the Fair Use Doctrine is intended to protect).
That's one of the major issues with the current discourse around Gen AI and 230. We have to understand the Gen AI stack before we can even consider liability.
In assessing liability, we have the platforms that provide the Gen AI services; the developers who create and fine-tune the models; the folks who create the datasets and the folks who use those datasets to train their models; and the users who supply inputs.
And we also have the platforms (again) that provide the "guidelines" and guardrails to determine what kinds of AI outputs are acceptable and aligned with the platform's overall editorial position.
Each of these aspects can involve different parties.