Just in: the TRO in NetChoice v. Yost (the Ohio Parental Notification lawsuit).
This order should serve as another massive warning to policymakers aiming to enact similar legislation this year. We have standing and we will show up every time. drive.google.com/file/d/1vMcQdQ…
BTW NetChoice had to get a TRO before the preliminary injunction hearing because the Ohio law is set to go into effect next week.
@NetChoice I love y'all's explanation here in particular:
Strong start -- the Court acknowledges NetChoice's Constitutional standing and the compliance burdens associated with the Act.
The real highlight though is the Court's acknowledgement of NetChoice's standing to bring claims on behalf of both its members AND Ohioan minors.
Regarding irreparable harm, the Court notes that NetChoice's members would have no way to recoup the costs of complying with the law if it is later struck down.
Regarding vagueness, the Court was not at all impressed with the State's attempt to clarify who the law applies to, calling attention to the loose 11-factor list set out by the Act.
The Court is similarly unimpressed w/the vague exception for "established" media entities.
Note to policymakers: attempting to target only the social media companies you don't like is absolutely screwing you in court EVERY SINGLE TIME.
Without more, this Court already sees a path forward for NetChoice given the Act's blatant constitutional defects.
A bombshell from the Court re: rights of minors to access speech. The Court pithily cites SCOTUS, determining that the law is clear: content-based regulations seeking to target minors are absolutely subject to strict scrutiny.
Congrats kiddos -- you have First Amendment rights!
The Court concludes with this fantastic quote:
"Foreclosing minors under sixteen from accessing all content on websites that the Act purports to cover, absent affirmative parental consent, is a breathtakingly blunt instrument for reducing social media's harm to children." 🔥
With that, it continues to bewilder me as to why states are still pushing these constitutionally defective, hamfisted, anti-information, anti-youth, pro-censorship bills.
Your colleagues are badly LOSING this fight. At some point it's got to be embarrassing...right?
Today, the Supreme Court announced its opinion in Murthy v. Missouri.
This case illustrates the complexities of online content moderation and offers some interesting insight into how the Court might rule on the long-awaited NetChoice & CCIA cases. 🧵 supremecourt.gov/opinions/23pdf…
The COVID-19 era was as confusing as it was terrifying. It was an era of extensive mask wearing, wiping down Amazon packages, Zoom funerals, online classrooms, and lots and lots of mis- and disinformation about the disease.
Horse tranqs, bleach injections, you name it.
At the time, much of this mis/disinformation spread on various online services, Facebook and Twitter included. The sources were a mix of so-called experts, public interest groups, conspiracy theorists, and even our own government.
I’m excited (and sad) to share that I will be leaving @ProgressChamber. I’ve accepted joint research fellowship positions at @santaclaralaw and @AkronLaw, focused on producing AI scholarship.
In other words, I’m officially in my academic era!
Last year, during my annual evaluation, I told @adamkovac that there was only one thing that could entice me to seriously consider leaving Chamber of Progress.
As many of you know, that one thing is an opportunity to achieve my lifelong dream of becoming a TT law professor.
At the time, I hadn't expected this opportunity to present itself anytime soon. In fact, I told Adam "but don't worry, that's like 5-6 years from now."
Turns out, like my Supreme Court predictions, I was only slightly off...
I published an article on California SB 1047, a bill that would effectively prohibit new AI model developers from emerging.
The bill does not apply to existing AI models or to derivative models built upon them. It's the worst I've seen yet. 🧵 medium.com/chamber-of-pro…
If you're going to talk about me, why not @ me? Are you afraid of my response?
At no point did I say my tweets are representative of my employer. And you know that -- as you said, I'm tweeting on a Sunday afternoon, outside of working hours.
[the following is my own opinion, not my employer's].
Last night, @ CreatureDesigns (Mike Corriero) posted an image of @brianlfrye, a Jewish law professor, depicted as Hitler + an image implying Brian's pending execution.
Pure violence and hatred.
Prior to that post, @ CreatureDesigns was engaged in a "discussion" with Brian and me about fair use and AI. Brian and I are notoriously pro-AI innovation and pro-free expression (which the Fair Use Doctrine is meant to protect).
That's one of the major issues with the current discourse around Gen AI and 230. We have to understand the Gen AI stack before we can even consider liability.
In assessing liability, we have the platforms that provide the Gen AI services and the developers who create and fine-tune the models. We have the folks who create the datasets and the folks who use those datasets to train their models. We have users who supply inputs.
And we also have the platforms (again) that provide the "guidelines" and guardrails to determine what kinds of AI outputs are acceptable and aligned with the platform's overall editorial position.
Each of these aspects can involve different parties.
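To make that concrete, here is a minimal, purely illustrative sketch of the stack in Python. The layer labels and example parties are my own hypothetical shorthand for the roles described above, not legal terms of art or anything drawn from a specific case.

```python
# Illustrative sketch only: hypothetical labels for the Gen AI "stack" described above.
from dataclasses import dataclass

@dataclass
class Layer:
    role: str   # what this layer does in the stack
    party: str  # who performs it (can differ per deployment)

# One possible configuration -- in practice each layer can be a different entity,
# or a single entity can occupy several layers at once.
gen_ai_stack = [
    Layer("creates the training dataset", "dataset curator"),
    Layer("uses the dataset to train a base model", "model developer"),
    Layer("fine-tunes the model for a product", "model developer or platform"),
    Layer("hosts the model and provides the service", "platform"),
    Layer("sets guidelines/guardrails on acceptable outputs", "platform (editorial role)"),
    Layer("supplies the prompt/input", "user"),
]

# Any liability analysis has to ask, for a given output, which layer(s) --
# and therefore which party or parties -- contributed to it.
for layer in gen_ai_stack:
    print(f"{layer.party}: {layer.role}")
```

The point of the sketch is simply that "who is liable for a Gen AI output" depends on which layer you are looking at, and those layers are frequently occupied by different parties.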