The CA case involves numerous complaints by minors alleging social media addiction. The issues raised here are similar, if not identical, to those raised in the ongoing federal school district MDL, so the same analysis follows.
Social media services are not "products" for purposes of products liability law. The Court instead proceeds on the negligence claims, similar to those arising out of Snapchat's speed filter in Lemmon v. Snap.
The Court focuses on negligence, first comparing the duty of care owed to pedestrians by electric scooter companies to the duties owed by online publishers.
Except the provision of electric scooters != publishing third-party speech.
Court also concludes that the harms complained of by the Plaintiffs are plausibly linked to Defendants' algorithmic designs.
That puts aside, though, the array of external factors at play in any individual minor's life that predispose them to those harms.
Court also finds reason to attach moral blame to the Defendant social media services, noting that the services could have opted for child-safety measures such as mandatory age verification;
...a measure that another California court just recently deemed unlikely to comport w/ the 1A.
The Court distinguishes liability for violent television programming from the algorithms used to curate and display third-party content online, suggesting the 1A shouldn't bar liability for the latter.
The distinction is arbitrary, as both concern the delivery of programmed content.
Court accepts the conduct-versus-content trope, disregarding that most, if not all, of the alleged harms derive from the kind of content displayed to users.
Yet content curation is acceptable for other media? (disregarding that Netflix also uses algo curation...)
The Court also accepts Plaintiffs' proximate causation theory, under which harms are alleged to derive from the usage of both TikTok and Instagram, disregarding that the two apps are meaningfully different in design and serve distinct purposes and content.
It's beyond me how courts are to divide and assign liability for each alleged harm that could be attributed to numerous different algo designs and content across many different online publishers, in addition to the other health and environmental factors at play in a user's life.
As for #Section230, the Court relies on Lemmon, concluding that the ways in which social media sites curate and display content, and provide tools for third parties to publish content, are first-party conduct having nothing to do with the role of publisher or speaker.
In reaching that conclusion, the Court uses the following examples, all of which essentially regard the facilitation of third-party speech: TikTok's auto-scroll feature, Snapchat's snap-streaks and filters, push notifications, and the lack of age vetting at account registration.
In cleaving these measures from 230, the Court suggests that none have to do with moderating and publishing third-party content.
Yet in practice, each is central to the facilitation of third-party content. Any harms derive entirely from the availability of that content pool.
The Court also relies on an exception for online publishers that meaningfully manipulate third-party content (e.g. changing the underlying meaning, removing warning labels).
The analogy is imprecise. Online services deliver and display content w/o altering the content itself.
The Court adds that Section 230(e)(3) permits state negligence claims like those alleged here, within the spirit of Congress' intent.
That conclusion misconstrues the exception and runs directly counter to Congress' intent to provide a national standard for UGC services.
Doubling down, the Court adds that 230 does not apply to the services' own operations, separating the algorithmic curation of content into its own special conduct category.
But the operations are central to 230. The services' conduct towards UGC is in fact the entire point...
Cubby and Stratton Oakmont, the case-law dilemma 230 was explicitly enacted to resolve, were entirely about the services' "operations" as applied to the third-party speech they host: hands-off curation vs. family-friendly moderation.
It has always been about publishing conduct.
The Court also attempts to distinguish Dyroff, noting a difference between harms derived from the content itself versus the publication conduct.
Yet, claims regarding eating disorders can't logically derive from publication measures absent the triggering third party content...
Again the Court buys into an arbitrary decoupling of the underlying content and the publication conduct without more.
The Court also rejects Prager, inviting yet another arbitrary distinction within the publishing algorithm itself (i.e. rote algorithmic recommendations vs. personalized algorithmic recommendations).
In practice, such a technological distinction is impractical and illogical.
Turning to the 1A, the Court pushes the Gonzalez theory that content curation algorithms are more akin to the physical makeup of a book than to the content found in the book itself.
The Court also fails to consider that algorithmic curation and publication are 'expressive activities.'
Again the Court pushes the nonsensical theory that addiction to social media can derive from the publication measures alone absent third party content.
At the same time, the Court seems to disregard the same algorithmic curation components at play for Netflix...
The Court also misconstrues the @NetChoice line of cases, suggesting that content moderation only encompasses the removal of content / users.
Of course, that conclusion disregards the inherent moderation function of curation algorithms designed to prioritize high-quality content.
Lastly, the Court rejects the 1A considerations under Sullivan and Tornillo for the sole reason that publication measures, like auto-scroll, are unlike the traditional publication functions employed by newspapers and broadcasters, an unsophisticated argument SCOTUS has already rejected.
The government is explicitly barred from encumbering adult access to legal speech.
Yet, that is the entire thrust of these social media addiction suits which have apparently duped this court.
Stay tuned for the inevitable appeal.
Also, let's be clear, the only "reasonable" alternative here that both the Court and Plaintiffs suggest is mandatory age verification for all users across all platforms for any and all content.
It's always about increased surveillance and censorship.
Today, the Supreme Court announced its opinion in Murthy v. Missouri.
This case illustrates the complexities of online content moderation and offers some interesting insight into how the Court might rule on the long-awaited NetChoice & CCIA cases. 🧵 supremecourt.gov/opinions/23pdf…
The COVID-19 era was as confusing as it was terrifying. It was an era of extensive mask wearing, wiping down Amazon packages, Zoom funerals, online classrooms, and lots and lots of mis- and disinformation about the disease.
Horse tranqs, bleach injections, you name it.
At the time, much of this mis/disinformation spread on various online services, Facebook and Twitter included. The sources were a mix of so-called experts, public interest groups, conspiracy theorists, and even our own government.
I’m excited (and sad) to share that I will be leaving @ProgressChamber. I’ve accepted joint research fellowship positions at @santaclaralaw and @AkronLaw, focused on producing AI scholarship.
In other words, I’m officially in my academic era!
Last year, during my annual evaluation, I told @adamkovac that there was only one thing that could entice me to seriously consider leaving Chamber of Progress.
As many of you know, that one thing is an opportunity to achieve my lifelong dream of becoming a TT law professor.
At the time, I hadn't expected this opportunity to present itself anytime soon. In fact, I told Adam "but don't worry, that's like 5-6 years from now."
Turns out, like my Supreme Court predictions, I was only slightly off...
I published an article on California SB 1047, a bill that would effectively prohibit new AI model developers from emerging.
The bill does not apply to existing (derivative) AI models or models built upon existing models. It's the worst I've seen yet. 🧵 medium.com/chamber-of-pro…
If you're going to talk about me, why not @ me? Are you afraid of my response?
At no point did I say my tweets are representative of my employer. And you know that -- as you said, I'm tweeting on a Sunday afternoon, outside of working hours.
[the following is my own opinion, not my employer's].
Last night, @ CreatureDesigns (Mike Corriero) posted an image of @brianlfrye, a Jewish law professor, depicted as Hitler + an image implying Brian's pending execution.
Pure violence and hatred.
Prior to that post, @ CreatureDesigns was engaged in a "discussion" with Brian and me about fair use and AI. Brian and I are notoriously pro-AI innovation and pro-free expression (which the Fair Use Doctrine is intended to protect).
That's one of the major issues with the current discourse around Gen AI and 230. We have to understand the Gen AI stack before we can even consider liability.
In assessing liability, we have the platforms that provide the Gen AI services and the developers who create and fine-tune the models. We have the folks who create the datasets and the folks who use those datasets to train their models. We have users who supply inputs.
And we also have the platforms (again) that provide the "guidelines" and guardrails to determine what kinds of AI outputs are acceptable and aligned with the platform's overall editorial position.
Each of these aspects can involve different parties.