Jess Miers 🦝 · Oct 15 · 27 tweets
[CORRECTED: Same thread as yesterday w/the first tweet edited. This is not about the school district cases.]

🚨 CA Court rejects #Section230 and 1A defenses in numerous social media addiction cases. Plaintiffs' negligence claims will proceed.🧵
The CA case involves numerous complaints by minors alleging social media addiction. The issues raised here are similar, if not identical, to those raised in the ongoing federal school district MDL. The same analysis follows.

Order here: acrobat.adobe.com/id/urn:aaid:sc…
Social media companies are not products for the purposes of products liability law. The Court instead proceeds on the negligence claims, similar to those arising out of Snapchat's speed filter in Lemmon v. Snap.
The Court focuses on negligence, first comparing the duty of care owed to pedestrians by electric scooter companies to the duties owed by online publishers.

Except the provision of electric scooters != publishing third-party speech.
Court also concludes that the harms complained of by the Plaintiffs are plausibly linked to Defendants' algorithmic designs.

This puts aside, though, the array of external factors at play in any individual minor's life that predispose them to said harms.
Court also finds reason to attach moral blame to the Defendant social media services, noting that the services could have opted for child-safety measures such as mandatory age verification...

...a measure that another California Court just recently deemed unlikely to comport w/1A
The Court distinguishes liability for violent television programming from the algorithms used to curate and display third party content online, suggesting 1A shouldn't bar the latter.

The distinction is arbitrary, as both concern the delivery of content programming.
Court accepts the conduct versus content trope, disregarding that the majority, if not all, of the alleged harms derive entirely from the kind of content displayed to users.

Yet, content curation is acceptable for other mediums? (Disregarding that Netflix also uses algorithmic curation...)
The Court also accepts Plaintiffs' proximate causation theory, under which Plaintiffs allege harms derived from the use of both TikTok and Instagram, disregarding that the two apps are meaningfully different in design and serve distinct purposes and content.
It's beyond me how courts are to divide and assign liability for each alleged harm when it could be attributed to numerous different algorithmic designs and content pools across many different online publishers, in addition to the other external health and environmental factors at play in a user's life.
As for #Section230, the Court relies on Lemmon, concluding that the ways in which social media sites curate and display content, and provide tools for third parties to publish content, are first-party behavior having nothing to do with the role of being a publisher or speaker.
In reaching that conclusion, the Court uses the following examples, all of which essentially regard the facilitation of third-party speech: TikTok's auto-scroll feature, Snapchat's snap-streaks and filters, push notifications, and the lack of age vetting at account registration.
In cleaving these measures from 230, the Court suggests that none have to do with moderating and publishing third-party content.

Yet in practice, each is central to the facilitation of third-party content. Any harms derive entirely from the availability of that content pool.
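To see why, consider what a feature like auto-scroll reduces to mechanically. Below is a minimal Python sketch, with entirely hypothetical names and structure (not any platform's actual code): the feature is just continuous pagination over a pool of third-party posts, so whatever it surfaces is, by construction, third-party content.

    # Hypothetical sketch of an "auto-scroll" feed; names are illustrative,
    # not any platform's actual API.
    from dataclasses import dataclass

    @dataclass
    class Post:
        author: str  # a third party, not the platform
        body: str    # third-party speech

    def next_page(pool: list[Post], cursor: int, size: int = 3) -> tuple[list[Post], int]:
        """Return the next batch of third-party posts and the advanced cursor."""
        batch = pool[cursor:cursor + size]
        return batch, cursor + len(batch)

    # "Auto-scroll" simply calls next_page() again as the user nears the end
    # of the current batch; every item it surfaces is third-party content.
    pool = [Post("user_a", "first post"), Post("user_b", "second post")]
    batch, cursor = next_page(pool, 0)

There is no content-free version of the feature: remove the third-party pool and there is nothing left to scroll.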
The Court also relies on an exception for online publishers that meaningfully manipulate third-party content (e.g. changing the underlying meaning, removing warning labels).

The analogy is imprecise. Online services deliver and display third-party content w/o altering it.
The Court adds that Section 230(e)(3) permits state negligence claims such as the ones alleged here, within the spirit of Congress' intent.

The conclusion misconstrues the exception and runs directly counter to Congress' intent to provide a national standard for UGC services.
Doubling down, the Court adds that 230 does not apply to the services' own operations, separating the algorithmic curation of content into its own special conduct category.

But the operations are central to 230. The services' conduct towards UGC is in fact the entire point...
Cubby and Stratton Oakmont, the case law dilemma 230 was explicitly enacted to resolve, were entirely about the services' "operations" as applied to the third-party speech they host: hands-off curation vs. family-friendly moderation.

It has always been about publishing conduct.
The Court also attempts to distinguish Dyroff, noting a difference between harms derived from the content itself versus the publication conduct.

Yet, claims regarding eating disorders can't logically derive from publication measures absent the triggering third-party content...
Again the Court buys into an arbitrary decoupling of the underlying content and the publication conduct without more.
The Court also rejects Prager, inviting yet another arbitrary distinction within the publishing algorithm itself (i.e. rote algorithmic recommendations vs. personalized algorithmic recommendations).

In practice, such a technological distinction is impractical and illogical; consider the sketch below.
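A minimal Python sketch (hypothetical names, purely illustrative): a "rote" feed and a "personalized" feed are typically the same ranking pipeline with a different scoring function swapped in.

    # Illustrative sketch: "rote" vs. "personalized" recommendations as the
    # same ranking pipeline with different scoring functions. All names are
    # hypothetical, not any platform's actual code.
    from typing import Callable

    posts = [
        {"id": 1, "timestamp": 100, "topic": "sports"},
        {"id": 2, "timestamp": 200, "topic": "music"},
    ]
    user = {"interests": {"music": 0.9, "sports": 0.2}}

    def rank(items: list[dict], score: Callable[[dict], float]) -> list[dict]:
        """One pipeline serves both 'kinds' of recommendation."""
        return sorted(items, key=score, reverse=True)

    def rote_score(post: dict) -> float:
        # "Rote": reverse-chronological, identical for every user.
        return post["timestamp"]

    def personalized_score(post: dict) -> float:
        # "Personalized": weighted by the user's inferred interests.
        return user["interests"].get(post["topic"], 0.0)

    feed_a = rank(posts, rote_score)          # the "rote" feed
    feed_b = rank(posts, personalized_score)  # the "personalized" feed

Swap one argument and the feed flips from "rote" to "personalized." A legal line drawn between the two turns liability on a single function parameter.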
Turning to 1A, the Court pushes the Gonzalez theory that content curation algorithms are more akin to a book's physical materials than to the content found in the book itself.

The Court also fails to consider that algorithmic curation and publication are 'expressive activities.'
Again the Court pushes the nonsensical theory that addiction to social media can derive from the publication measures alone, absent third-party content.

At the same time, the Court seems to disregard the same algorithmic curation components at play for Netflix...
The Court also misconstrues the @NetChoice line of cases, suggesting that content moderation only encompasses the removal of content / users.

Of course, the conclusion disregards the inherent moderation function of curation algorithms designed to prioritize high-quality content.
Lastly, the Court rejects the 1A considerations under Sullivan and Tornillo for the sole reason that publication measures, like auto-scroll, are unlike the traditional publication functions employed by newspapers and broadcasters, an unsophisticated argument SCOTUS has rejected.
The government is explicitly barred from encumbering adult access to legal speech.

Yet, that is the entire thrust of these social media addiction suits, which have apparently duped this court.

Stay tuned for the inevitable appeal.
Also, let's be clear, the only "reasonable" alternative here that both the Court and Plaintiffs suggest is mandatory age verification for all users across all platforms for any and all content.

It's always about increased surveillance and censorship.