Hello -- I interrupt the past two weeks of ranting about SCOTUS and #Section230 to bring you this *really freaking important* piece of legal scholarship by @ericgoldman.
This article pissed me off and I hope it pisses you off too. Welcome to Jess after dark🧵
What if I told you that there's an emerging, popular litigation scheme that involves throwing as many defendants into a complaint as a plaintiff can think of, regardless of cause, jurisdiction, or the basic rules of civil procedure?
(we're talking like hundreds of defendants)
What if I told you that those same plaintiffs don't typically incur additional costs for this throw-defendants-at-the-wall scheme?
In fact what if I told you that plaintiffs are usually rewarded for their chaotic evil behavior?
AND what if I told you that plaintiffs actually have a ton of incentives to engage in the scheme, because most of the time judges never bother to assess improper joinder, service of process, or jurisdiction over every single 'Schedule A' defendant?
often resulting in an ex parte TRO or a straight up default judgment...
Worse, what if I told you that you yourself might be a Schedule A defendant and you just don't know it yet, because judges typically allow plaintiffs to SEPARATELY SEAL THE SCHEDULE A LIST OF DEFENDANTS...
This is literally litigation via DDoS.
Anyway, you might say, Jess that's ridiculous. We have rules for a reason. Surely, you're exaggerating.
So, the next time you hear someone assert that the U.S. should embrace and adopt EU / UK Internet regulations, bring up this paper as the ultimate display of FUCKERY that is the U.S. litigation system.
EU litigation is child's play by comparison.
I have waited so long to bring this article to y'all's attention. Pls read and not enjoy.
• • •
I published an article on California SB 1047, a bill that would effectively prohibit new AI model developers from emerging.
The bill does not apply to existing AI models or to (derivative) models built upon existing models. It's the worst I've seen yet. 🧵 medium.com/chamber-of-pro…
If you're going to talk about me, why not @ me? Are you afraid of my response?
At no point did I say my tweets are representative of my employer. And you know that -- as you said, I'm tweeting on a Sunday afternoon, outside of working hours.
[the following is my own opinion, not my employer's].
Last night, @ CreatureDesigns (Mike Corriero) posted an image of @brianlfrye, a Jewish law professor, depicted as Hitler + an image implying Brian's pending execution.
Pure violence and hatred.
Prior to that post, @ CreatureDesigns was engaged in a "discussion" with Brian and me about fair use and AI. Brian and I are notoriously pro-AI innovation and pro free expression (for which the Fair Use Doctrine is intended).
That's one of the major issues with the current discourse around Gen AI and 230. We have to understand the Gen AI stack before we can even consider liability.
In assessing liability, we have the platforms that provide the Gen AI services and the developers who create and fine-tune the models. We have the folks who create the datasets and the folks who implement the datasets to train their models. We have users who supply inputs.
And we also have the platforms (again) that provide the "guidelines" and guardrails to determine what kinds of AI outputs are acceptable and aligned with the platform's overall editorial position.
Each of these aspects can involve different parties.
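The stack described above can be sketched as a simple data model. This is purely illustrative (the layer names and party roles are my own labels, not a legal taxonomy from the thread), but it makes the point concrete: a single liability question can implicate several distinct parties, and sometimes the same party at multiple layers.

```python
from dataclasses import dataclass

# Hypothetical sketch of the Gen AI "stack"; layer and party names
# are illustrative assumptions, not a formal legal classification.
@dataclass(frozen=True)
class Layer:
    name: str
    party: str  # who typically acts at this layer

GEN_AI_STACK = [
    Layer("service provision", "platform operator"),
    Layer("model creation / fine-tuning", "model developer"),
    Layer("dataset creation", "dataset creator"),
    Layer("dataset implementation (training)", "model trainer"),
    Layer("input supply", "end user"),
    Layer("output guardrails / editorial policy", "platform operator"),
]

def distinct_parties(stack):
    """The distinct parties a liability analysis would need to consider."""
    return sorted({layer.party for layer in stack})
```

Note that "platform operator" appears twice: once providing the service and again setting the guardrails, which mirrors the point that the same entity can act at different layers of the stack.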
The Generative AI Copyright Disclosure Act of 2024 requires anyone using a dataset to train AI to disclose any copyrighted works in the set to the U.S. Copyright Office to be displayed via a public database. 🧵 schiff.house.gov/imo/media/doc/…
Copyright attaches automatically to any creative works fixed in a tangible medium of expression.
So, pretty much all works used to train an AI system will require disclosures, regardless of fair use considerations.
(btw, you don't "train" a dataset; you train a model *on* a dataset. But details.)
BUT THAT'S NOT ALL!
Datasets are incredibly dynamic, especially when it comes to AI training. So, each time the set is updated in a "significant manner," the notice requirement is triggered.
Yesterday, the Ninth Circuit filed its order in Diep v. Apple. They had me in the first half...
Strong #Section230 ruling regarding Apple's content moderation efforts. Until the Court got to the UCL claims...creating yet another bizarre 230 loophole. sigh. 🧵
Hadona Diep is a cybersecurity professional.
She downloaded an app called "Toast Plus" from Apple's App Store, thinking it was the "Toast Wallet" for storing cryptocurrency.
It was not the Toast Wallet.
Long after transferring a reasonable sum of crypto to Toast Plus, Diep discovered that her crypto was missing and her account was deleted.
Among other claims, Diep sued Apple under state consumer protection law + negligence for failing to "vet" and remove Toast Plus.
CSM argues that AB 3172 is "only" a statutory damages bill.
But they accidentally said the quiet part out loud: the goal is effectively a prior restraint, forcing online publishers to restrain their protected editorial decisions, if those decisions could "harm" a younger user.
In other words, by levying millions of dollars worth of damages for editorial decisions that could be considered harmful to a child, AB 3172 effectively chills private speech.
That's what it means to "be more careful" when we're talking about private publishers.