Jess Miers 🦝
Oct 15, 2023 · 27 tweets
[CORRECTED: Same thread as yesterday w/first tweet edited. These are not the school district cases]

🚨 CA Court rejects #Section230 and 1A defenses in numerous social media addiction cases. Plaintiffs' negligence claims will proceed.🧵
The CA case involves numerous complaints by minors alleging addiction claims. The issues raised here are similar, if not identical, to the issues raised in the federal school district MDL (ongoing). The same analysis follows.

Order here: acrobat.adobe.com/id/urn:aaid:sc…
The Court holds that social media services are not "products" for the purposes of products liability law, and instead proceeds on the negligence claims, similar to those arising out of Snapchat's speed filter in Lemmon v. Snap.
The Court focuses on negligence, first comparing the duty of care owed to pedestrians by electric scooter companies to the duties owed by online publishers.

Except, the provisioning of electric scooters != publishing third party speech.
Court also concludes that the harms complained of by the Plaintiffs are plausibly linked to Defendants' algorithmic designs.

Putting aside, though, the array of external factors at play in any individual minor's life that predispose them to said harms.
Court also finds reason to attach moral blame to the Defendant social media services, noting that the services could have opted for child-safety measures such as mandatory age verification;

...a measure that another California Court just recently deemed unlikely to comport w/1A
The Court distinguishes liability for violent television programming from the algorithms used to curate and display third party content online, suggesting 1A shouldn't bar the latter.

The distinction is arbitrary: both concern the delivery of content programming.
Court accepts the conduct versus content trope, disregarding that the majority, if not all, of the alleged harms derive entirely from the kind of content displayed to users.

Yet content curation is apparently acceptable for other mediums? (Never mind that Netflix also uses algorithmic curation...)
The Court also accepts Plaintiffs' proximate causation theory, under which Plaintiffs allege harms derived from the use of both TikTok and Instagram, disregarding that the two apps are meaningfully different in design and serve distinct purposes and content.
It's beyond me how courts are to divide and assign liability for each alleged harm that could be attributed to numerous different algo designs and content across many different online publishers, in addition to the other external health and environmental factors at play in a user's life.
As for #Section230, the Court relies on Lemmon, concluding that the ways in which social media sites curate and display content, and provide tools for third parties to publish content, constitute first-party behavior having nothing to do with the role of publisher or speaker.
In reaching that conclusion, the Court uses the following examples, all of which essentially regard the facilitation of third-party speech: TikTok's auto-scroll feature, Snapchat's snap-streaks and filters, push notifications, and the lack of age vetting at account registration.
In cleaving these measures from 230, the Court suggests that none have to do with moderating and publishing third-party content.

Yet in practice, each is central to the facilitation of third-party content. Any harms derive entirely from the availability of that content pool.
The Court also relies on an exception for online publishers that meaningfully manipulate third-party content (e.g. changing the underlying meaning, removing warning labels).

The analogy is imprecise. Online services deliver and display content w/o altering the content itself.
The Court adds that Section 230(e)(3) permits state negligence claims such as the ones alleged here, within the spirit of Congress's intent.

The conclusion misconstrues the exception and runs directly counter to Congress's intent to provide a national standard for UGC services.
Doubling down, the Court adds that 230 does not apply to the services' own operations, separating the algorithmic curation of content into its own special conduct category.

But the operations are central to 230. The services' conduct towards UGC is in fact the entire point...
Cubby and Stratton Oakmont, the case-law dilemma that Section 230 was explicitly enacted to resolve, were entirely about the services' "operations" as applied to the third-party speech they host: hands-off curation vs. family-friendly moderation.

It has always been about publishing conduct.
The Court also attempts to distinguish Dyroff, noting a difference between harms derived from the content itself versus the publication conduct.

Yet, claims regarding eating disorders can't logically derive from publication measures absent the triggering third party content...
Again the Court buys into an arbitrary decoupling of the underlying content and the publication conduct without more.
The Court also rejects Prager, inviting yet another arbitrary distinction within the publishing algorithm itself (i.e. rote algorithmic recommendations vs. personalized algorithmic recommendations).

In practice, such a technological distinction is impractical and illogical.
Turning to 1A, the Court pushes the Gonzalez theory that content curation algorithms are more akin to physical book material than the content found in the book itself.

The Court also fails to consider that algorithmic curation and publication are 'expressive activities.'
Again the Court pushes the nonsensical theory that addiction to social media can derive from the publication measures alone absent third party content.

At the same time, the Court seems to disregard the same algorithmic curation components at play for Netflix...
The Court also misconstrues the @NetChoice line of cases, suggesting that content moderation only encompasses the removal of content / users.

Of course, the conclusion disregards the inherent moderation function of curation algorithms designed to prioritize high-quality content.
Lastly, the Court rejects the 1A considerations under Sullivan and Tornillo for the sole reason that publication measures, like auto-scroll, are unlike the traditional publication functions employed by newspapers and broadcasters; an unsophisticated argument that SCOTUS has rejected.
The government is explicitly barred from encumbering adult access to legal speech.

Yet, that is the entire thrust of these social media addiction suits which have apparently duped this court.

Stay tuned for the inevitable appeal.
Also, let's be clear: the only "reasonable" alternative that both the Court and Plaintiffs suggest here is mandatory age verification for all users, across all platforms, for any and all content.

It's always about increased surveillance and censorship.