The bill adds a carve-out to Section 230's statutory exceptions for any civil suit or criminal prosecution where the underlying claim involves Generative AI.
The bill then defines "Generative Artificial Intelligence" broadly (and circularly) as any AI system capable of generating content based on user input.
Notice that it says prompts *OR* other forms of data provided by a person.
Implications:
1. Obviously, the bill reaches any company providing generative AI products for consumer use (think ChatGPT, Bard, Stable Diffusion, Midjourney, etc.).
There has been plenty of discourse surrounding 230's applicability to those products.
Earlier this year, I wrote an article arguing that Section 230 should apply to most civil suits where the underlying claim concerns outputs that were influenced by users.
But as I've said over and over, regardless of differing interpretations, providers of Generative AI are at risk of being inundated with Plaintiff suits that will spell the end for this technology.
It's the exact same issue we saw during the early CDA debates.
When the Communications Decency Act was enacted, groups like the ACLU flocked to the courts to explain how the CDA would undermine the Internet.
The Supreme Court was at a crossroads in Reno. Fortunately, it recognized the importance of providing room for the Internet to grow.
Had the Supreme Court ruled differently in Reno, the landscape of the modern web would be drastically different. In fact, it likely wouldn't even exist.
Instead, we might find ourselves confined to a more restricted 'walled garden' reminiscent of the '90s web.
We have reached that same crossroads with Generative AI today.
If, by law, Section 230 cannot be used as a defense to abusive user prompt engineering, then Gen AI companies new and old will have to decide whether their ventures are truly worth it.
Plus, keep in mind that the VCs of Silicon Valley understand these immense legal risks as well. If, by law, Section 230 is no longer available, that adds yet another risk factor for VCs considering investment in AI upstarts.
The viability of the 230 defense is entirely an aside to this debate. We won't know whether Section 230 applies to certain claims against Generative AI companies until we test it in court.
Hawley's bill forecloses that opportunity to test the defense.
Worse, the bill assumes that all claims against Generative AI companies will be uniform. But as we all know, Generative AI is advancing rapidly, and with each iteration and innovation, there will be a clever Plaintiff lurking around the corner to get their bag.
Take the Mark Walters case in Georgia for example: Walters v. OpenAI.
In that case, a reporter asked ChatGPT repeatedly to summarize a case involving the Second Amendment Foundation (SAF). ChatGPT responded several times that it couldn't read the document (it had no web access).
Still, the reporter persisted. Eventually, ChatGPT spun a tale about Mark Walters embezzling funds from SAF. According to Walters, this is patently false, and Walters had nothing to do with SAF.
But ChatGPT didn't spin this tale alone. It had help from the reporter.
In the chat logs, the reporter (Fred Riehl) coaxes ChatGPT to complete its story. This signals to ChatGPT that the Walters story it produced was acceptable and that the user (Riehl) wishes to develop it further.
It's clear from the chat logs that Riehl's role was significant.
Riehl then called up Mark Walters and told him about his exchange with ChatGPT. Walters and Riehl tried to get ChatGPT to produce the results again in another chat, but ChatGPT refused.
That means that the false narrative was only ever seen by Riehl.
In my opinion, this is a case where a Section 230 defense could be viable to the extent that Riehl played a significant role as the information content provider by engineering his prompts to develop the Walters story.
ChatGPT doesn't operate without user input.
Here is an excellent article by @CathyGellis explaining that Section 230 has always been about exploring who imbued the product with the illegal content: techdirt.com/2023/03/23/how…
But again, this is all completely aside from the problem today. We can go back and forth all day on whether 230 applies to certain instances of Gen AI hallucinations. But none of it matters if there's a statutory exception preventing us from even making those arguments.
And I think everyone in the 230 / speech community, even those who disagree that 230 could / should protect Gen AI providers, can agree that we as lawyers should at least be able to make the argument, especially in cases like Walters v. OpenAI.
2. The bill also extends beyond providers of Gen AI by defining Gen AI as any AI system capable of doing AI.
For example, algorithmic curation (i.e. the way social media displays content to us) is an AI system that operates based on user input.
IMO this is the true ulterior motive behind the bill. We're already seeing Plaintiffs get past 230 by framing their claims as "negligent design" instead of third-party content.
This new AI exception makes it even easier for Plaintiffs to do the same for any company that uses AI.
Which means that not only are the Gen AI upstarts at risk, but now any startup that uses AI to deal with user-generated content is at risk too.
This AI exception, coupled with the precedent developing around "negligent design," will be the nail in 230's coffin.
We are still in the early stages of Generative AI. We're still discovering the many ways in which it can be used to improve our lives. Companies are using AI to improve their products (like Google incorporating Gen AI into Search to provide relevant results).
We're on the brink of losing our edge in Generative AI and stifling future innovations, all due to misplaced anti-tech sentiment.
Our startup-friendly culture once set us apart from the EU, but now, we're just mirroring their playbook.
I'm at UCLA today for the California Senate Judiciary informational hearing on "The Importance of Journalism in the Digital Age" (i.e. the notorious CJPA). I'll be live tweeting from @ProgressChamber starting around 1pm PT.
Expect the snark(ier) commentary @ my personal account.
Last year, multiple school districts and private plaintiffs across the nation filed complaints against social media services: Google, Meta, Snap, and TikTok.
This order addresses the first wave of complaints from the school districts and individuals.
The "master complaint" combining all of the claims so far is ~ 300 pages asserting 18 claims brought under various state laws on behalf of hundreds of plaintiffs.
For efficiency, the federal court required plaintiffs to identify their top 5 priority claims. π
The reality, though, is that controls like fair use must exist to stop rightsholder orgs (like the RIAA and MPA) from monopolizing ideas and expression in ways that inevitably stifle independent artists. 🧵
Using existing works to create new, transformative works isn't unique to the AI industry.
It's a principle of creativity that stands to be fundamentally destroyed should the copyright monopolists get their way here.
If simply being inspired or influenced by existing works amounts to infringement, then the ability to create based on our own experiences and learnings about the world will be lost.
There will always be arguments that one artist's work was inspired by another's.
There's obviously a ton of AI / copyright buzz these days, primarily dominated by the following topics:
1. Fair use for model training
2. Output infringement
3. Ownership of AI-generated works
But what about secondary liability for the providers of #GenAI tools?
I imagine there will come a time when courts assign direct liability to users (or prompt engineers) who supply AI generators with prompts designed to:
(1) compromise training sets (if the service doesn't properly sanitize inputs); and/or
(2) generate blatantly infringing outputs.
And when that inevitably happens, there will be a question as to whether the provider of the Gen AI tool is contributorily / vicariously liable (inducement aside for now). What then?
I've seen arguments that because the AI isn't human, it can't be an infringer.
I'm reading the transcript of the recently argued Lindke v. Freed SCOTUS case.
This case tees up an important Internet law issue for the Court: when does a public official's social media activity constitute state action? 🧵 acrobat.adobe.com/id/urn:aaid:sc…
Quick Recap: Respondent James Freed is the city manager of Port Huron, Michigan, where Petitioner Kevin Lindke resides. Freed blocked Lindke from accessing Freed's Facebook page. Lindke argues Freed's action violated his 1A rights. The 6th Cir disagreed.
This case parallels O'Connor-Ratcliff v. Garnier, involving similar facts (govt blocking of citizens). Unlike the 6th Cir, the 9th Cir held that blocking does constitute state action.
Yesterday, @ProgressChamber submitted comments responding to the U.S. Copyright Office's Notice of Inquiry on AI and Copyright.
In sum, we suggest that existing copyright law and fair use principles adequately address the latest advancements in #GenAI. 🧵 acrobat.adobe.com/id/urn:aaid:sc…
1. The capabilities of Generative AI can be a foundation for idea formulation and inspiration for artists and creators more generally. Among other things, Gen AI improves content moderation, revolutionizes medical research, enhances education, and bolsters autonomous vehicles.
Indeed, the societal benefits of Gen AI are readily apparent. Policymakers must keep these benefits in mind as they approach regulation. Otherwise, we may never realize the tech's true potential.
Copyright law is one major threat that could deliver a crushing blow to AI.