Jess Miers 🦝
Apr 11
That's one of the major issues with the current discourse around Gen AI and 230. We have to understand the Gen AI stack before we can even consider liability.

IMO 230 could apply to Gen AI for some use cases. techdirt.com/2023/03/17/yes…
In assessing liability, we have the platforms that provide the Gen AI services and the developers who create and fine-tune the models. We have the folks who create the datasets and the folks who use those datasets to train their models. We have the users who supply inputs.
And we also have the platforms (again) that provide the "guidelines" and guardrails to determine what kinds of AI outputs are acceptable and aligned with the platform's overall editorial position.

Each of these aspects can involve different parties.
The folks who develop the models may not be the same folks who implement them as part of a Gen AI service. The folks who created the training datasets may not have anything to do with the model developers or the platforms.

This matters for assigning liability.
(This is also just a high-level summary of the Gen AI stack. There is a lot more to it.)
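To make the "different parties" point concrete, here is a toy sketch of the stack's separable roles (illustrative Python only; the class names are my assumptions and don't correspond to any real service or API):

from dataclasses import dataclass

# Toy model of the Gen AI stack sketched above. Every class and name here
# is illustrative only; none corresponds to a real company, service, or API.

@dataclass
class DatasetCreator:      # assembles and publishes the training data
    name: str

@dataclass
class ModelDeveloper:      # trains / fine-tunes a model on that data
    name: str
    dataset: DatasetCreator

@dataclass
class Platform:            # wraps the model in a user-facing Gen AI service,
    name: str              # adding guardrails and editorial "guidelines"
    model_developer: ModelDeveloper
    guardrails: list[str]

@dataclass
class User:                # supplies the inputs (prompts)
    handle: str

if __name__ == "__main__":
    data = DatasetCreator("independent dataset curator")
    lab = ModelDeveloper("independent model lab", dataset=data)
    service = Platform("consumer chatbot", model_developer=lab,
                       guardrails=["no illegal or abusive outputs"])
    prompter = User("@end_user")

    # Four roles that may -- or may not -- be four different parties:
    for party in (data, lab, service, prompter):
        print(party)

In practice, any two of these roles can collapse into one company, or all four can be distinct, which is exactly why liability has to be assessed role by role.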
So how could 230 come into play?

--platforms that offer plug-and-play chatbots built on models and datasets developed independently of the platform (this was the case for DeviantArt's chatbot, which relied entirely on Stability AI's model; see the sketch after this list);

--developers who supply models that are misused by the platforms that implement them or by the users who jailbreak them;

--users who engage with or repost tortious AI-generated content;

--platforms whose Gen AI services "publish" hallucinations or illegal outputs at the behest of abusive users;

--platforms that implement guardrails that fail against user inputs;

--platforms that feed user inputs back into the models to improve them.
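Here is a rough sketch of that plug-and-play scenario (illustrative Python only; the function names, the crude blocklist, and the feedback log are my assumptions, not anyone's actual implementation). The platform below built neither the model nor the dataset, yet it is the party applying guardrails, "publishing" the output, and recycling user inputs as training data:

from typing import Callable

# Hypothetical "plug-and-play" chatbot flow. The platform did not build the
# model or the dataset; it only wires a third-party model into its service,
# applies its own guardrails, and recycles user inputs as future training
# data. All names and the crude blocklist here are assumptions.

BLOCKLIST = ("non-consensual imagery of", "defame")

def guardrails_pass(prompt: str) -> bool:
    """Platform-level guardrail: a crude filter that can (and will) fail."""
    return not any(term in prompt.lower() for term in BLOCKLIST)

def run_chatbot(prompt: str,
                third_party_model: Callable[[str], str],
                feedback_log: list[str]) -> str:
    if not guardrails_pass(prompt):
        return "[blocked by platform guardrails]"
    output = third_party_model(prompt)   # model developed by someone else
    feedback_log.append(prompt)          # user input fed back to improve the model
    return output                        # the output the platform "publishes"

if __name__ == "__main__":
    fake_model = lambda p: f"(model output for: {p})"   # stand-in for an external model
    log: list[str] = []
    print(run_chatbot("draw a raccoon lawyer", fake_model, log))

The guardrail check, the published output, and the feedback loop each map to a different bullet above, and each step can involve a different party in the stack.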
And then of course you have platforms and developers that purposefully design their models, datasets, and services to produce tortious and illegal content. Or platforms that simply encourage illegal content (like those Gen AI services that create non-consensual explicit imagery (NCEI) of real people).
And for those situations we might look to Roommates or Lemmon v. Snap for further guidance.
The point is that the question "does 230 apply to Generative AI?" is inherently complicated and involves a thorough evaluation of the Gen AI stack, the parties involved in the development of Gen AI services (including the users), and derivative liability.

It's not a yes/no question.
And I'll end with this:

If 230 doesn't protect Generative AI, either because the courts refuse to apply it or because Congress amends 230 to carve out AI, we had better think of something else, because this technology will not survive the legal deluge that is sure to follow.