Kristen Ruby
Mar 1 · 7 tweets · 2 min read
If AI chatbots keep defaming living people in search- a class action will be next.

These AI chatbots are outright defaming people in the AI-generated answers.

That is not legal: defamatory statements of fact fall outside free speech protections and can create liability.
“Defamation occurs if you make a false statement of fact about someone else that harms that person’s reputation. Such speech is not protected by the First Amendment and could result in criminal and civil liability.”
“Defamation against public officials or public figures also requires that the party making the statement used ‘actual malice,’ meaning the false statement was made ‘with knowledge that it was false or with reckless disregard of whether it was false or not.’”
“Defamation is the communication of a false statement that harms the reputation of another. When in written form it is often called ‘libel’.”
“There is no such thing as a false opinion or idea – however, there can be a false fact, and these are not protected under the First Amendment. When these false facts harm the reputation of others, legal action can be taken against the speaker.”

The “speaker” is the AI chatbot.
The most dangerous part of this is that the search engine is actually committing libel.

This would be the equivalent of sending a takedown notice to Bing about Bing itself: not about content that appears in search results on Bing, but about content generated BY Bing.
ChatGPT is nicer than Bing 🫠


More from @sparklingruby

Mar 1
I find this highly disturbing.

There was no hacker. No one hacked anything.

Why is Bing basically calling me a criminal?

“What are the legal issues and risks for Kris Ruby?”
“She says she is willing to cooperate with any law enforcement investigation.”

Did I say that? It’s almost like Bing is setting me up as a criminal and telling me what is about to happen next.
“I think they are fake and she is lying”
Mar 1
“Does the product actually use #AI at all? If you think you can get away with baseless claims that your product is AI-enabled, think again.”

- Federal Trade Commission ftc.gov/business-guida…
“If something goes wrong – maybe it fails or yields biased results – you can’t just blame a third-party developer of the technology. And you can’t say you’re not responsible because that technology is a “black box” you can’t understand or didn’t know how to test.” -@FTC
“Whatever it can or can’t do, #AI is important, and so are the claims you make about it. You don’t need a machine to predict what the FTC might do when those claims are unsupported.”
Feb 27
Developing a Federal #AI Standards Engagement Plan🧵
The former Director of Twitter's Responsible Machine Learning (META) division previously worked at Accenture.

Let's review some highlights from the document below:

nist.gov/system/files/d…
"We encourage the United States government to adopt a similar whole-of-government approach"
Feb 26
AI / ML Researchers and Directors of ML Fairness pose a significant threat to society.

In many respects, they hold more power than some members of Congress.

They have the power to enact policy without even debating it.

They can directly embed their worldview in LLM without… twitter.com/i/web/status/1…
Their definition of “fairness in ML” often means rewriting history.

If they don’t like the results, they will literally change them to align with their worldview.

It’s hard to explain how dangerous this is for the future of humanity in a tweet.
To be clear, there is a difference between legitimate researchers and those who are activists disguised as researchers.

Many bad actors have infiltrated academic institutions re ML/AI to implement political policy.

This has largely gone unchecked for years.
Feb 26
How can writers opt-out of having their work used in training data?

This is a serious IP issue. #AI
This is not the first time this has happened.

I am noticing a pattern.

I ask the AI a question about a topic I have extensively published on, and the answer it generates is verbatim from my own writing and reporting.
What is the incentive of publishing online only to have your writing be used in training data that you never consented to?

Over time, this will result in fewer people publishing content online if their work is used without their consent to train ML models.

Zero upside for the writer.
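One partial opt-out mechanism, sketched here as an illustration rather than a complete answer to the question above: a site's robots.txt can ask known AI-training crawlers not to fetch its pages. The user-agent tokens shown (GPTBot for OpenAI, CCBot for Common Crawl) are documented by those organizations, but which crawlers apply, and whether each one honors robots.txt, are assumptions; the rule also does nothing for work already collected in existing training sets.

```
# Hypothetical robots.txt for a writer's site.
# Disallow crawling by OpenAI's training crawler:
User-agent: GPTBot
Disallow: /

# Disallow Common Crawl's crawler, whose corpus is widely used for training:
User-agent: CCBot
Disallow: /
```

This relies entirely on voluntary compliance under the Robots Exclusion Protocol; it is a request, not an enforcement mechanism.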
Feb 25
In 2019, Twitter acquired Fabula AI

“Fabula is a particularly notable acquisition, as the underlying technology is squarely focused on fighting the spread of misinformation online.”

venturebeat.com/ai/twitter-acq…
“Imagine that we find a strong correlation between the political affiliation of a user and this ‘credibility’ score.

So for example we can tell with hyper-ability that if someone is a Trump supporter then he or she will be mainly spreading fake news. techcrunch.com/2019/02/06/fab… twitter.com/i/web/status/1…
“Step forward Fabula, which has patented what it dubs a “new class” of machine learning algorithms to detect “fake news” — in the emergent field of “Geometric Deep Learning”; where the datasets to be studied are so large and complex that traditional patentimages.storage.googleapis.com/19/f6/c8/1e402… twitter.com/i/web/status/1…