The CA case involves numerous complaints by minors alleging addiction-related harms. The issues raised here are similar, if not identical, to those raised in the ongoing federal school district MDL. The same analysis follows.
Social media companies are not products for the purposes of products liability law. Court instead proceeds on the negligence claims, similar to the ones arising out of Snapchat's speed filter in Lemmon v. Snap.
The Court focuses on negligence, first comparing the duty of care owed to pedestrians by electric scooter companies to the duties owed by online publishers.
Except, the provisioning of electric scooters != publishing third party speech.
Court also concludes that the harms complained of by the Plaintiffs are plausibly linked to Defendants' algorithmic designs.
Putting aside, though, the array of external factors at play in any individual minor's life that predispose them to said harms.
Court also finds reason to attach moral blame to the Defendant social media services, noting that the services could have opted for child-safety measures such as mandatory age verification;
...a measure another California court just recently deemed unlikely to comport w/1A.
The Court distinguishes liability for violent television programming from the algorithms used to curate and display third party content online, suggesting 1A shouldn't bar the latter.
The distinction is arbitrary as both regard the delivery of content programming.
Court accepts the conduct versus content trope, disregarding that the majority, if not all, of the alleged harms derive entirely from the kind of content displayed to users.
Yet, content curation is acceptable for other mediums? (disregarding Netflix also uses algo curation...)
The Court also accepts Plaintiffs' proximate causation theory, under which Plaintiffs allege harms derived from the usage of both TikTok and Instagram, disregarding that the two apps are meaningfully different in design and serve distinct purposes and content.
It's beyond me how courts are to divide or assign liability when each alleged harm could be attributed to numerous different algo designs and content across many different online publishers, in addition to the other health and environmental factors at play in a user's life.
As for #Section230, the Court relies on Lemmon, concluding that the ways in which social media sites curate and display content, and provide tools for third parties to publish content, is first party behavior having nothing to do with the role of being a publisher / speaker.
In reaching that conclusion, the Court uses the following examples, all of which essentially regard the facilitation of third party speech: TikTok's auto-scroll feature, Snapchat's snap-streaks and filters, push notifications, and the lack of age vetting at account registration.
In cleaving these measures from 230, the Court suggests that none have to do with moderating and publishing third-party content.
Yet in practice, each is central to the facilitation of third-party content. Any harms derive entirely from the availability of that content pool.
The Court also relies on an exception for online publishers that meaningfully manipulate third-party content (e.g. changing the underlying meaning, removing warning labels).
The analogy is imprecise. Online services deliver and display content w/o altering the content itself.
The Court adds that Section 230(e)(3) permits state negligence claims such as those alleged here, within the spirit of Congress' intent.
That conclusion misconstrues the exception and runs directly counter to Congress' intent to provide a national standard for UGC services.
Doubling down, the Court adds that 230 does not apply to the services' own operations, separating the algorithmic curation of content into its own special conduct category.
But the operations are central to 230. The services' conduct towards UGC is in fact the entire point...
Cubby and Stratton Oakmont, the case law dilemma 230 was explicitly enacted to resolve, were entirely about the services' "operations" as applied to the third-party speech they host: hands-off curation vs. family-friendly moderation.
It has always been about publishing conduct.
The Court also attempts to distinguish Dyroff, noting a difference between harms derived from the content itself versus the publication conduct.
Yet, claims regarding eating disorders can't logically derive from publication measures absent the triggering third party content...
Again the Court buys into an arbitrary decoupling of the underlying content and the publication conduct without more.
The Court also rejects Prager, inviting yet another arbitrary distinction within the publishing algorithm itself (i.e. rote algorithmic recommendations vs. personalized algorithmic recommendations).
In practice, such a technological distinction is impractical and illogical.
Turning to 1A, the Court pushes the Gonzalez theory that content curation algorithms are more akin to physical book material than the content found in the book itself.
The Court also fails to consider that algorithmic curation and publication are 'expressive activities.'
Again the Court pushes the nonsensical theory that addiction to social media can derive from the publication measures alone absent third party content.
At the same time, the Court seems to disregard the same algorithmic curation components at play for Netflix...
The Court also misconstrues the @NetChoice line of cases, suggesting that content moderation only encompasses the removal of content / users.
Of course, the conclusion disregards the inherent moderation function of curation algorithms designed to prioritize high quality content.
Lastly, the Court rejects the 1A considerations under Sullivan and Tornillo for the sole reason that publication measures, like auto-scroll, are unlike the traditional publication functions employed by newspapers and broadcasters; an unsophisticated argument SCOTUS has rejected.
The government is explicitly barred from encumbering adult access to legal speech.
Yet, that is the entire thrust of these social media addiction suits which have apparently duped this court.
Stay tuned for the inevitable appeal.
Also, let's be clear, the only "reasonable" alternative here that both the Court and Plaintiffs suggest is mandatory age verification for all users across all platforms for any and all content.
It's always about increased surveillance and censorship.
While the govt may have a limited ability to restrict the manner of speech in order to protect unwilling viewers among the public, it is expressly forbidden from restricting willing adults' access to legally protected speech.
The latter is the essence of the school district suits.
The Court emphasizes Act 689's failure to reach sites like Parler, Gab, and Truth Social (a recurring problem).
If the intent is truly to protect kids from awful content, why not include the sites responsible for some of the most heinous and hateful content produced online?
🚨 California is about to enact yet another blatantly unconstitutional speech law. SB 680 prohibits websites from using a design, algorithm, or feature that causes harm or addiction for any user 16 years old or younger.
The bill will have extraordinary consequences. 🧵 https://t.co/yuybicLGm4
What California is doing isn't new. When states want to control speech, they use one of two justifications for legislation: (1) national security (Montana TikTok ban); or (2) kids' safety.
The underlying goal is all the same: restrict expression and access to information.
SB 680's enactment will come just WEEKS after Judge Freeman grilled California over its equally unconstitutional AADC legislation.
Judge Freeman didn't buy the State's 'conduct not content' argument then. It's baffling the State is trying it again now.
This afternoon, the DOJ filed their brief advising SCOTUS to grant cert in the @NetChoice and @ccianet speech cases against Texas and Florida.
Two key takeaways: (1) The Texas and Florida laws violate 1A; (2) the mandatory disclosures may not supremecourt.gov/DocketPDF/22/2…
It's been a while so let's recap:
Florida enacted SB 7072 in May 2021. The law creates content moderation restrictions on social media companies, prohibiting them from engaging in certain moderation activities for certain users and topics (e.g. political candidates).
SB 7072 also mandates certain disclosures about the companies' editorial practices. The platforms must also provide an individualized explanation to a user if they remove or alter the user's posts.
11th Cir held that the content mod provisions violate 1A but the disclosures do not.
One of the reasons @ericgoldman is widely considered a thought leader in this space is his keen ability to anticipate and predict the next iteration of tech law.
This casebook update is a huge deal. The changes reflect the next wave of practice. Lawyers: take notice.
Internet law is taught so differently throughout the nation. One thing I've always particularly respected about @ericgoldman's curriculum is that it's so practical and fundamentals-focused that passing tech fads almost never necessitate their own updates.
So, when a major curriculum update like this one occurs, I pay attention.
Yesterday we filed an amicus brief in support of App Stores, developers, and consumers, urging the Ninth Circuit to affirm #Section230 protections for in-app payment processing.
The alternative would cause chaos for financial privacy/security, and harm the creator economy.
Plaintiffs in this case are relying primarily on a loophole from the HomeAway case, which abridged 230 protections for "transactions" connected to the underlying content at issue (i.e. illegal home-sharing listings).
The same result would hose small app developers.
In-app payment processing is core to app revenue for creators and the app marketplace. Holding App Stores liable for providing their in-app payment tools to developers is a surefire way to discourage in-app payments generally.