The Court emphasizes Act 689's failure to reach sites like Parler, Gab, and Truth Social (a recurring problem).
If the intent is truly to protect kids from awful content, why not include the sites responsible for some of the most heinous and hateful content produced online?
Inviting testimony from a UK "age verification expert" was the State's fatal flaw imo.
The UK's OSB and AADC are not comparable to the laws enacted here in the U.S. Plus, our litigation systems operate entirely differently. It's apples to oranges.
And there you have it. The expert himself admits that state of the art age verification requires a user to upload sensitive identification documents to a third party vendor for assessment; the very thing we've been warning about since day 1 of the age verification epidemic.
I hadn't actually considered voice uploads for AI analysis as a means to verify age / ID.
BTW scammers increasingly use stolen voice data to impersonate their victims on calls to financial institutions. Surely these 3P vendors are fully prepped for MITM (man-in-the-middle) attacks?
BTW lol at "takes only a minute."
It took me half an hour to complete my verification to become an Amazon seller as Amazon's ID software kept failing to match my selfie to my ID photo.
Imagine having to go through this for every website you wish to access.
.@ericgoldman addressed this burden in his amicus brief supporting NetChoice in NetChoice v. Bonta (re: California AADC). papers.ssrn.com/sol3/papers.cf…
The Court doesn't mince words regarding NetChoice's standing, acknowledging the harm that NetChoice's members are likely to incur once the law goes into effect.
This sends an important message to other states considering these laws. The trades can and will sue.
Additionally, the Court's holding on prudential standing is huge. Not only can NetChoice challenge the law on behalf of its members but also on behalf of its members' customers (i.e. the users whose 1A rights will be abridged as a result).
AGAIN THIS IS HUGE FOR THE TRADES!
The Court first holds that 689 is unconstitutionally vague. It's unclear who the law actually applies to. The State screwed up with conflicting testimony from their UK expert suggesting Snap is in scope...yikes.
This shows the challenge of attacking certain social media co's...
This is a crucial problem inherent in all parental-consent and age verification legislation: how is a service supposed to know whether a parent truly granted consent? Kids can have different last names from their parents, and one parent could grant access while the other revokes it (which parent controls?). What about foster kids?
The State will likely never use this expert again....
Turning to the First Amendment, the Court rejects the State's argument that 689 is like any law prohibiting minors from bars and casinos.
The Court: bars and casinos are NOT speech! 📣🔥👏
This exchange is too funny to not include. Apparently the whole mall is a bar...? Yikes.
Regardless, the Court decides to apply intermediate scrutiny to the 1A claim. This has nothing to do with the merits of the claim; rather, the Court did not wish to opine on them this early in the proceeding.
Nevertheless, the Court still finds the law overly burdensome, as it restricts adults' access to constitutionally protected speech. Beautiful cite to Reno v. ACLU.
Other states better take notice.
The Court reiterates the valid security concerns users may have in turning over their identification data to 3P vendors. This conclusion brought to you by the State's super helpful expert again! 😂
Another 💣. The Court concludes that Act 689 violates minors' 1A rights to access information. Cites to Brown and Reno.
The Court again calls bullshit on the State's intent, scoping in FB and TikTok but not several other services with large audiences of kids (YouTube included).
The Court also notes the law addresses account creation, not time spent on the service.
The Court concludes that 689 is not narrowly tailored to address any of the content harms raised by the State.
In sum, parents can grant consent for their kid to create an account, but the kid can still encounter the very content the law is apparently aimed at curbing.
The Court finds a likelihood of irreparable harm for the services forced to comply with 689 and for their users who will lose access to protected speech and expression.
This is a fantastic opinion. Still, I remain cautiously optimistic about its survival on appeal.
Today, the Supreme Court announced its opinion in Murthy v. Missouri.
This case illustrates the complexities of online content moderation and offers some interesting insight into how the Court might rule on the long-awaited NetChoice & CCIA cases. 🧵 supremecourt.gov/opinions/23pdf…
The COVID-19 era was as confusing as it was terrifying. It was an era of extensive mask wearing, wiping down Amazon packages, Zoom funerals, online classrooms, and lots and lots of mis- and disinformation about the disease.
Horse tranqs, bleach injections, you name it.
At the time, much of this mis/disinformation spread on various online services, Facebook and Twitter included. The sources were a mix of so-called experts, public interest groups, conspiracy theorists, and even our own government.
I’m excited (and sad) to share that I will be leaving @ProgressChamber. I’ve accepted joint research fellowship positions at @santaclaralaw and @AkronLaw, focused on producing AI scholarship.
In other words, I’m officially in my academic era!
Last year, during my annual evaluation, I told @adamkovac that there was only one thing that could entice me to seriously consider leaving Chamber of Progress.
As many of you know, that one thing is an opportunity to achieve my lifelong dream of becoming a TT law professor.
At the time, I hadn't expected this opportunity to present itself anytime soon. In fact, I told Adam "but don't worry, that's like 5-6 years from now."
Turns out, like my Supreme Court predictions, I was only slightly off...
I published an article on California SB 1047, a bill that would effectively prohibit new AI model developers from emerging.
The bill does not apply to existing (derivative) AI models or models built upon existing models. It's the worst I've seen yet. 🧵 medium.com/chamber-of-pro…
If you're going to talk about me, why not @ me? Are you afraid of my response?
At no point did I say my tweets are representative of my employer. And you know that -- as you said, I'm tweeting on a Sunday afternoon, outside of working hours.
[the following is my own opinion, not my employer's].
Last night, @ CreatureDesigns (Mike Corriero) posted an image of @brianlfrye, a Jewish law professor, depicted as Hitler + an image implying Brian's pending execution.
Pure violence and hatred.
Prior to that post, @ CreatureDesigns was engaged in a "discussion" with Brian and me about fair use and AI. Brian and I are notoriously pro-AI innovation and pro-free expression (which the Fair Use Doctrine is intended to protect).
That's one of the major issues with the current discourse around Gen AI and 230. We have to understand the Gen AI stack before we can even consider liability.
In assessing liability, we have the platforms that provide the Gen AI services and the developers who create and fine-tune the models. We have the folks who create the datasets and the folks who use those datasets to train their models. We have users who supply inputs.
And we also have the platforms (again) that provide the "guidelines" and guardrails to determine what kinds of AI outputs are acceptable and aligned with the platform's overall editorial position.
Each of these aspects can involve different parties.
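Purely as an illustration (the layer labels below are my own hypothetical shorthand, not anyone's legal framework), here's a minimal sketch of that stack and who acts at each layer:

```python
# Illustrative sketch of the Gen AI "stack" described above.
# Layer names and descriptions are hypothetical labels, for discussion only.
from dataclasses import dataclass

@dataclass
class StackLayer:
    party: str         # who acts at this layer
    contribution: str  # what they contribute to the eventual output

GEN_AI_STACK = [
    StackLayer("dataset creators", "assemble and curate the training data"),
    StackLayer("model developers", "create and fine-tune the models on that data"),
    StackLayer("platforms", "host the Gen AI service and set guardrails for acceptable outputs"),
    StackLayer("users", "supply the inputs (prompts) that trigger generation"),
]

# Any liability analysis has to ask, layer by layer, which party actually
# contributed to the allegedly unlawful output; it is often a different party at each step.
for layer in GEN_AI_STACK:
    print(f"{layer.party}: {layer.contribution}")
```

The point of the sketch: the same company can occupy more than one layer, or none, which is exactly why a one-size-fits-all liability rule (or a blanket 230 answer) doesn't map cleanly onto Gen AI.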