Yesterday, @ProgressChamber submitted comments responding to the U.S. Copyright Office's Notice of Inquiry on AI and Copyright.
In sum, we suggest that existing copyright law and fair use principles adequately address the latest advancements in #GenAI.🧵 acrobat.adobe.com/id/urn:aaid:sc…
1. Generative AI can serve as a foundation for idea formulation and a source of inspiration for artists and creators more generally. Among other things, Gen AI improves content moderation, revolutionizes medical research, enhances education, and bolsters autonomous vehicles.
Indeed, the societal benefits of Gen AI are readily apparent. Policymakers must keep these benefits in mind as they approach regulation. Otherwise, we may never realize the tech's true potential.
Copyright law, if stretched beyond its limits, is one major threat that could deliver a crushing blow to AI.
2. Gen AI poses no unique challenges that existing copyright law and fair use principles do not already address.
Fair use ensures that rightsholders cannot monopolize creative ideas (or styles), creating an ecosystem conducive to continual innovation.
3. Intermediate copying has long been recognized as fair use.
Gen AI aligns with human learning, where exposure to existing works shapes and influences fresh creations, rather than simply piecing together existing content. It is undoubtedly transformative.
We saw the same in Field v. Google, which established that Google's scraping of publicly available information online to build a search index is a transformative use and therefore fair.
We saw similar results in Authors Guild v. Google (the Google Books case) and Google v. Oracle.
4. Gen AI outputs are unlikely to be substantially similar to works in the training set. And when they are (because the training set isn't adequately diverse / large), infringement will be properly addressed by current law.
The Getty Images case is an interesting example 👇🏻
(In fact, the recent SCOTUS decision in Warhol provides even more support for rightsholders claiming substantial similarity, especially when the use is commercial.)
5. Artistic style must remain outside the scope of copyright law.
However, this doesn't render rightsholders powerless. While style alone can never be the crux of an artist's grievance, courts still evaluate it alongside the defendant's overall expression.
Even now, without any alterations to existing copyright law, AI providers are already taking proactive measures to protect artists.
For instance, Stability AI and OpenAI have already modified their services to deny requests that mimic established artists' styles.
6. Legislation must provide incentives for AI companies to improve their models and preserve human expression.
Like their social media counterparts, providers of Gen AI services need assurance that they will not be inundated w/ litigation when users abuse their products.
Given the inherently opaque nature of Gen AI and the unpredictable behavior of human users, providers of Gen AI services need liability safeguards that shield them when users intentionally submit infringement-driven queries.
In the same vein, providers of Gen AI services should not automatically obtain legal knowledge of infringement based on user input alone. Nor should they acquire legal knowledge based on what they learn from improving their data sets and safeguards to prevent copyright abuse.
A 'DMCA for AI' may seem tempting, but we have decades of lessons learned from the pitfalls of the notice-and-takedown framework and its impact on the industry and user expression.
Policymakers now have an opportunity to create something better and more sustainable for AI.
In sum, copyright law and fair use are working as intended, even as consumer-facing Gen AI continues to advance.
In addition to copyright, rightsholders have other means of addressing their grievances over unauthorized access to their works (e.g. CFAA, trespass to chattels).
If policymakers flood the zone w/brittle and unnecessary copyright policy, we could witness the curtains fall on Gen AI entirely.
Worse, reactive approaches to AI policy could also bleed into other aspects of the web, threatening the basic functionality of the Internet and UGC.
Or, to quote the great @PamelaSamuelson:
"Copyright law is the only law that’s already in existence that could bring generative AI systems to their knees."
Breaking: Judge Orrick dismissed most of the claims brought by the artists in Andersen v. Stability AI.
This was an unsurprising result. Just because the technology is "new" doesn't mean we disregard current law. The claims were doomed regardless of AI.🧵 acrobat.adobe.com/id/urn:aaid:sc…
The holding isn't precedential. It reaffirms longstanding copyright principles:
(1) General pleading is not enough. Plaintiffs must identify specific works that were allegedly infringed.
(2) No infringement for outputs that are not substantially similar to a protected work.
Quick recap:
Andersen et al. allege Stability AI infringes their works by providing those works to Stable Diffusion for training. They also allege all Stable Diffusion outputs are derivative works because the training data consists of protected works.
Note -- the complaint is heavily redacted throughout, making it difficult to opine on some of the claims (such as COPPA). Those facts matter, so we'll have to wait for more details to come out.
The dissent finds the majority's decision to stay the injunction "highly disturbing," noting that the case is unlikely to be resolved until spring of next year, allowing the government to interfere with social media platforms' editorial decisions until then.
The dissent also argues that the potential harms raised by the govt (such as precluding govt actors from speaking publicly about sensitive events) are unfounded and speculative, and do not warrant the stay.
The CA case involves numerous complaints by minors alleging social media addiction. The issues raised here are similar, if not identical, to those in the ongoing federal school district MDL. The same analysis follows.
Social media companies are not products for the purposes of products liability law. The court instead proceeds on the negligence claims, similar to the ones arising out of Snapchat's speed filter in Lemmon v. Snap.
While the govt may have a limited right to restrict the manner of speech to protect unwilling members of the public, it is expressly forbidden from restricting willing adults' access to legally protected speech.
The latter is the essence of school district suits.
The Court emphasizes Act 689's failure to reach sites like Parler, Gab, and Truth Social (a recurring problem).
If the intent is truly to protect kids from awful content, why not include the sites responsible for some of the most heinous and hateful content produced online?