Jess Miers 🦝
Jun 7 · 12 tweets
Here we go. Defamation case against OpenAI regarding allegedly false ChatGPT outputs. H/T @CathyGellis courthousenews.com/wp-content/upl…
More coverage from Eugene Volokh here: reason.com/volokh/2023/06…
(I'm already getting the #Section230 questions...)

I agree with Prof. Volokh's take overall. I can see the complaint failing without even needing to reach the 230 issues. It doesn't seem OpenAI was put on notice of the allegedly false output by the plaintiff, and the claimed damages are suspect.
Remember: Section 230 is a defense. But before we even reach 230, we have to ask whether the complaint is viable in the first place. Here, it's likely not.

If anything, this case will likely be another example of a defamation case against a website that fails on the pleadings.
FWIW @ericgoldman and I captured many similar cases in our research here: papers.ssrn.com/sol3/papers.cf…
Also -- it appears that the Plaintiff is alleging malice on OpenAI's part, which is also highly suspect. I doubt OpenAI has that level of granular control when it comes to these random ChatGPT outputs.
Prof. Volokh also makes a really crucial point regarding the knowledge question -- just because OpenAI may have a general awareness that ChatGPT sometimes spits out garbage doesn't necessarily mean that it had knowledge of this specific incident.
We just dealt with a similar knowledge issue in the Taamneh case. Just because a company has general knowledge that its products and services could be used for illegal purposes doesn't mean the company is liable for every instance of those uses.
We want OpenAI to have that general awareness that ChatGPT sometimes provides garbage outputs.

That feedback is crucial so that OpenAI can continue improving ChatGPT.
Lastly, we should think carefully about these kinds of cases (this one certainly won't be the last). What do we want OpenAI to do here? Perhaps they could provide more disclosures that urge folks not to rely on anything ChatGPT says as fact. But that's about it.
It's pretty much all or nothing with this kind of technology. In using it, we accept that there will be a lot of junk. But the alternative very well might be ripping the service off the market entirely.

Is that the desired outcome?

• • •

More from @jess_miers

Jun 4
Weird paternalism aside, we need to actually talk about this because this proposal is unfortunately not unique.

In fact, many states have proposed (and some have even enacted) identical legislation. These kinds of laws will do kids more harm than good. Let's dig into it.🧵
First off, there has never been a "pre-algorithm" era of the web. The Internet is built on algorithms. TCP/IP, the foundational communication protocol suite underlying the web, is literally a set of algorithms.

So, to technologists, legislation that "bans algorithms" is nonsense.
Social media and other online services are an amalgamation of algorithms. Google's PageRank algorithm is part of how we get search results tailored to our queries.

Facebook's News Feed and Twitter's timeline both use algorithms to ensure we see relevant content.
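To make the "everything is an algorithm" point concrete, here's a minimal, hypothetical sketch (not any platform's actual ranking code; the Post fields and scoring rule are invented for illustration): a plain reverse-chronological timeline and an engagement-ranked feed are both just sorting algorithms applied to the same posts.

```python
# Toy illustration only -- no platform's real code.
# Even the "no algorithm" option (newest-first) is itself an algorithm.
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    timestamp: float          # seconds since epoch
    likes: int
    followed_by_viewer: bool  # invented signal for illustration

def chronological_feed(posts: list[Post]) -> list[Post]:
    """The 'pre-algorithm' feed: sort newest-first. Still an algorithm."""
    return sorted(posts, key=lambda p: p.timestamp, reverse=True)

def ranked_feed(posts: list[Post]) -> list[Post]:
    """A toy relevance ranking: engagement plus a boost for followed accounts."""
    return sorted(
        posts,
        key=lambda p: p.likes + (10 if p.followed_by_viewer else 0),
        reverse=True,
    )

posts = [
    Post("alice", timestamp=1.0, likes=50, followed_by_viewer=False),
    Post("bob", timestamp=3.0, likes=2, followed_by_viewer=True),
]
print([p.author for p in chronological_feed(posts)])  # ['bob', 'alice']
print([p.author for p in ranked_feed(posts)])         # ['alice', 'bob']
```

Any rule that "bans algorithms" would have to explain which of these orderings counts as one -- because there is no way to show content without some ordering rule.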
Jun 1
What's egregious is that a California Representative is copying legislation from regions that consistently fail to preserve speech and expression.

Asm Wicks, we are not the EU. We are not the AU. Why are you enabling foreign control over California companies? #CJPA
Further, what is your end goal for Silicon Valley tech?

With the AADC, you're forcing tech companies to perform age verification on all of your constituents, further collecting sensitive identification info (largely in violation of other CA privacy laws).
You've created an environment where tech companies are discouraged from improving their products and services for kids and families.

And in fact, you're encouraging companies to block minor users entirely, disrupting their lives and frustrating their access to resources.
May 31
"Everything about this is filthy and corrupt. It’s literally Rep. Buffy Wicks and others in the California legislature saying “we’re forcing companies we dislike to give money to companies we like.”
techdirt.com/2023/05/31/cal…
"Publications that make less than $100k per year are not eligible, so independent journalists or small one or two person journalism outfits are cut out of the deal. Hell, Techdirt would likely be excluded."
"The government picking and choosing which journalism orgs get cut into the corruption seems… problematic?"
May 30
No, they really aren't. They solve none of the problems raised about the bill and, in some respects, make the bill even worse.

*70% of rev goes to this nebulous journo support (meaning 30% goes to the hedge funds that are actually killing small journos);
*Journos making under $100k/year in revenue are still not even eligible for the "protections" under this bill (meaning a majority of smaller / marginalized news outlets will be left behind in favor of larger outlets);
*The arbitration provision is even more of a cluster than before. Now it allows a joint coalition to arbitrate against the websites, but notice to join said coalition must be given before Feb 2024 (meaning market entrants after that deadline don't get to take advantage of it anyway).
May 30
"The harms of unregulated social media are established and clear" No, they're really not, Governor @GavinNewsom. In fact, that assertion is incredibly intellectually dishonest as it disregards several studies that demonstrate otherwise.
You don't get to just declare 'harm to kids' as a convenient excuse to abridge the 1A rights of California citizens. We don't tolerate that sort of EU-inspired paternalistic B.S. in this country.
You made a bad law. Now it's up to the Courts to decide whether it sticks. If you're confident that it can hold up to strict scrutiny, then you shouldn't have any concerns.

But inserting yourself into the necessary checks in place to temper your power is pretty gross.
May 18
Alright, now that I've had more than a minute to think about all of this, some additional thoughts:

This was the *best case scenario* for these cases today. The Taamneh opinion only reinforces the status quo (a major win for websites AND users). #SCOTUS
I have to say, it was surprising to see such a clean win authored by Justice Thomas, no less. I think many of us assumed he would have been eager to undermine existing precedent around the 1A and even 230 as applied to online publishers.

That's not at all what occurred here.
Keep in mind, of course, that this was a very narrow issue for SCOTUS (aiding and abetting law). So it doesn't really come as a surprise that we got such a clean decision here.

The bigger concern really was how the Court would approach Gonzalez. They got it exactly right.
