Last year, multiple school districts and private plaintiffs across the nation filed complaints against social media companies: Google, Meta, Snap, and TikTok.
This order addresses the first wave of complaints from the school districts and individuals.
The "master complaint" combining all of the claims so far is ~ 300 pages asserting 18 claims brought under various state laws on behalf of hundreds of plaintiffs.
For efficiency, the federal court required plaintiffs to identify their top 5 priority claims. 👇
Alleged defects:
Infinite scroll; no screen time limits; intermittent variable rewards; ephemeral content; content length; notifications; algorithms; filters; barriers to deletion; pairing children with adults; private chat; geolocation; no age verification/parental controls.
The plaintiffs allege that each of these defects caused their negative physical, mental, and emotional health outcomes such as anxiety, depression, and self-harm.
Plaintiffs also allege that defendants violated COPPA and the Protect Our Children Act (negligence per se claim).
The social media defendants responded with two motions to dismiss (MTDs).
The first motion argues that the plaintiffs did not properly state each of the 5 claims.
The second motion argues that Section 230 and the First Amendment bar those claims anyway.
🟨The Court granted in part and denied in part the motions to dismiss for claims 1-4.
🟥The Court denied the Section 230 and 1A defenses for claim 5 (negligence per se).
⚠️One more thing: the Court splits the order into three parts:
Part 1: Section 230 analysis for the 5 priority claims.
Part 2: First Amendment analysis for the 5 claims.
Part 3: Products liability analysis for the claims that survived parts 1&2.
I will cover Parts 1 & 2.
🧵[PART 1: 230]
Right off the bat, the Court starts w/the point that social media companies are more than "mere" search engines or message boards.
If anything, that should signal to the Court that social media services play an *even bigger* editorial role than other services do.
Court provides an overview of 230.
The publisher/speaker assessment is what matters most. Here, the Court draws heavily from CA9's Internet Brands decision, teeing up a distinction between claims that would require changes to the underlying content (230 applies) and claims that would not (no 230).
This is important for the Court's 230 application to each "defect." The Court will assign each to one of two categories:
Category 1: fixing this defect would require touching underlying content.
Category 2: fixing this defect would not require touching underlying content.
The Court similarly relies on Lemmon v. Snap (the case having to do with Snap's speed filter) to make the same point.
The Court articulates an imprecise interpretation of the Roommates material contribution exception to 230.
The service will always be "involved" in the "posting and presentation" of third-party content. The Q is whether the service had a hand in developing the content at issue.
[Claims 1 & 3: DEFECTIVE DESIGN]
To support their products liability claims, the Plaintiffs offer several alleged "defects" of the social media services.
(this is where the Court starts separating the defects by 230's applicability to each. You'll see this again for 1A).
The Court is not persuaded by either party's "all or nothing" approach to 230 (i.e. 230 either bars the products liability claims or it doesn't). Instead, the Court opts for a "conduct-specific" approach (signaling their effort to categorize each defect).
1. Product Defects Not Barred by 230 (the claims that survive)
According to the Court, failure to provide age verification, offering filters that augment content, failing to label that filtered content, and notifications having to do with "defendant's content" (??) are not barred by 230.
If you're struggling to wrap your head around these distinctions, you're not alone: the analysis of each "defect" is confusing and scant. Undoubtedly, all of these activities go to the services' editorial roles in hosting 3P content for their audiences. The Court doesn't agree.
Using the Court's "does it touch the content" test, a few of these activities particularly stand out.
For example, age verification would require services to alter their content offerings and algorithms to ensure that users are only exposed to age-appropriate content.
This is also just judicial gymnastics.
There is no meaningful difference between the allegation in the Myspace case that the defendant should have used age verification and the plaintiffs' allegation here that social media companies should offer the same.
Moreover, image filters (think insta photo filters) have everything to do with the underlying third-party content. Filters are a publishing tool offered to users to change the aesthetics of their posts. Similarly, filter disclosures would require services to alter the content.
Notifications are also part of the online publishing toolkit. Services use notifications to alert their users about new, user-driven activity on the services. Hence, the distinction between notifications about the defendant's content and notifications about users' content is unclear.
2. Product defects barred by 230 (the claims that fail):
According to the Court, the "defects" barred by Section 230 include algorithmic curation and display, infinite scroll, ephemeral content, and notifications about third-party content (again, ???).
Again, the analysis is unclear. I'm reading this part as the Court suggesting that 230 doesn't apply to allegations that omitting *opt-outs* is defective.
But 230 does apply if the claim suggests a lack of *defaults* is defective.
In other words: opt-outs don't interfere with content presentation but defaults do.
(it's okay, I'm lost too).
Opt-outs still require an alternative offering (and perhaps augmentation) of content to specific users. So, I'm not sure how this squares with the "does it touch the content" test.
For that same reason, offering opt-outs is also inherently a "publisher" decision.
At least the Court does recognize that algorithmic curation and display is inherently a publishing function.
I'm still not sure though how to square that with the list of defects that are not barred by Section 230 (as each also requires manipulation of user content).
Accordingly, Section 230 also applies to ephemeral content features, private messaging, notifications about third-party content, and algorithmic curation.
Though the journey to get here was pretty weird, this is the right result.
The Court also closes with a good note on algorithmic curation: it expressly states that it doesn't matter whether the service uses algorithms designed to keep users on the platform or merely to curate content. It's all the same, and Section 230 protects it.
That should send a strong signal to future plaintiffs that algorithmic design, without more, won't get them around 230 either.
Curation and display of third-party content is a publishing function regardless of whether it's done manually or with algorithms.
[Claims 2, 4: FAILURE TO WARN]
Plaintiffs also allege that the Defendants offered their products and services without providing adequate warnings about the potential harms they can cause. Defendants didn't raise 230 here, and the Court concludes it wouldn't apply anyway.
To me, this seems a lot like the Prop 65 warnings in California. Theoretically, everything and anything could give you cancer, but it also depends on your own personal health circumstances.
The same goes for each individual social media user.
Any type of content could trigger any kind of harm for a user, especially if that user is already pre-disposed to those harms due to existing mental health or other underlying conditions.
As with Prop 65, social media companies would have to label everything. But is that helpful?
[Claim 5: NEGLIGENCE PER SE]
Negligence per se arises out of statutory violations. Here, Plaintiffs allege that Defendants violated COPPA and the Protect Our Children Act.
The Court says Section 230 doesn't apply because the acts do not implicate the services' roles as publishers.
My main concern regarding the COPPA claim is that COPPA doesn't mandate age verification measures. It only applies where the platform has actual knowledge that a specific user is under 13. I haven't seen that alleged yet.
/END OF PART 1
🧵[PART 2: First Amendment]
We do it all over again.
Defendants argue that in addition to Section 230, the First Amendment bars Plaintiffs' claims to the extent that their claims regard protected speech activities. The Court again prefers the defect-by-defect approach.
The Court also provides a 1A overview👇
[Claims 1 & 3: DEFECTIVE DESIGN]
The Court assesses the claims that were not barred by Section 230: lack of parental controls, lack of voluntary time restrictions, making it hard to delete accounts, not using age verification, not implementing CSAM reporting for non-users.
The Court concludes that the remaining defects would not require Defendants to change how or what content they publish to their audiences.
This entirely ignores identical 1A issues highlighted by the California court in NetChoice v. Bonta and SCOTUS's holding in Reno v. ACLU.
Re: image filters, the Court doesn't buy Defendants' argument that the filters facilitate user expression. Instead, the Court inappropriately extends the Defendants' "neutral tools" 230 argument to suggest that the filters have no "expressive" functions.
That is ridiculous.
Re: notifications of "defendants' content," this is the only "defect" that fails per the 1A. The Court finds no way for Defendants to mitigate without altering when and how much speech they publish.
For 230 purposes, shouldn't notifications be considered a publisher function then?
[Claims 2 & 4: FAILURE TO WARN]
The Court says the Defendants did not present a complete 1A defense to the failure-to-warn claims. Accordingly, the Court denies the motion to dismiss these claims on 1A grounds.
[Claim 5: NEGLIGENCE PER SE]
Defendants also waived 1A defenses to this claim. Accordingly, the Court denies the motion to dismiss this claim on 1A grounds.
/END PART 2
Having concluded that neither 230 nor the 1A fully bars the Plaintiffs' products liability claims, the Court then proceeds to a full analysis of the elements of Claims 1-4 (i.e. whether the plaintiffs properly alleged those claims).
The Court will address Claim 5 in a subsequent order.
It's probably safe to assume that the social media companies will appeal this order. Stay tuned for that.
So, what does this all mean?
For starters, we're once again observing the narrowing of Section 230. Plaintiffs are becoming increasingly successful at pleading around Section 230 with products liability / defective design claims. Courts are buying the conduct vs. content arguments.
At the same time, courts are still not buying the algorithmic curation arguments (which is good). For now at least, algorithmic curation and display of content remains well within the scope of Section 230's protection.
We're also seeing the erosion of First Amendment protections for social media specific publishing functions. The upcoming NetChoice / CCIA cases could rectify this trend (or make it worse...).
Keep an eye on NetChoice v. Bonta which will provide guidance on 1A/age verification.
Overall, the legal landscape for online services dealing with UGC is becoming increasingly complex and hostile.
The best analogy I've seen so far is the current state of copyright litigation. The more fact-driven these cases become, the riskier they are for upstarts.
The U.S. is racing towards adopting an EU-style Internet regulation model.
But unless we also plan on adopting the EU's loser-pays system, our tech sector could very well be doomed.
Don't forget, there's also a state-court version with similar claims proceeding separately in California: threadreaderapp.com/thread/1713585…
Note -- the complaint is heavily redacted throughout, making it difficult to opine on some of the claims (such as COPPA). Those facts matter, so we'll have to wait for more details to come out.