Kristen Ruby
Feb 27 · 15 tweets · 5 min read
Developing a Federal #AI Standards Engagement Plan🧵
The former Director of Twitter's Responsible Machine Learning META division worked at Accenture.

Let's review some highlights from the below doc:

nist.gov/system/files/d…
"We encourage the United States government to adopt a similar whole-of-government approach" Image
This paragraph on adversarial attacks with #NLP sounds familiar.

It almost sounds like… exactly what I reported in The Ruby Files re: how Twitter weaponized NLP.

Adversarial NLP attacks can come from within the organization - the bad actor is *not* always external.
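For readers unfamiliar with the term: an adversarial NLP attack perturbs text so a human reads the same meaning but the model's output flips. A minimal sketch, with a toy keyword "model" and a hand-picked synonym table standing in for the real thing (neither comes from the NIST document or the Ruby Files):

```python
# Toy word-substitution attack on a text classifier. The "model" is a
# trivial keyword scorer standing in for a real NLP model, and the
# synonym table is hand-picked; both are invented for illustration.

NEGATIVE_WORDS = {"awful", "terrible", "hate"}
SYNONYMS = {"awful": ["subpar"], "terrible": ["uneven"], "hate": ["doubt"]}

def classify(text: str) -> str:
    """Stand-in model: flags text containing strongly negative words."""
    return "negative" if set(text.lower().split()) & NEGATIVE_WORDS else "neutral"

def attack(text: str) -> str:
    """Swap words for near-synonyms until the model's label flips."""
    words = text.lower().split()
    for i, word in enumerate(words):
        for sub in SYNONYMS.get(word, []):
            candidate = words[:i] + [sub] + words[i + 1:]
            if classify(" ".join(candidate)) != classify(text):
                return " ".join(candidate)
    return text  # no flip found

original = "this product is awful"
adversarial = attack(original)
print(original, "->", classify(original))        # negative
print(adversarial, "->", classify(adversarial))  # neutral
```

An insider who knows the model's weak spots can craft these substitutions far more efficiently than an outside attacker probing blindly.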
Machine Learning Warfare:

"The model misinterprets the content of the image and misclassifies it. An attacker can tailor the expected behavior of an algorithm to achieve a number of outcomes." Image
"Adversial #AI targets areas of the attack surface that have never previously had to be secured, the AI models themselves.

From now on, organizations need to include these in their security budgets- or risk them being exploited by attackers." Image
"Poisoned training data" Image
"We would support NIST developing a sandboxing scheme for AI"

"....test and pilot AI algorithms and tools..." Image
Next, let's review some highlights from the academic research paper:
If you read the research paper carefully, you will notice a common pattern emerge.

A policy recommendation in the name of AI fairness - and a recommendation to change the model's results if the output does not align with their definition of "fairness."

This is not academic research. It is activism.
Machine Learning - Communism 3.0

"While equal opportunity is enforceable."
In The #RubyFiles data, another pattern emerges.

NIST was repeatedly flagged as an n-gram in the government agency category.

Why was Twitter using Machine Learning to monitor for NIST mentions?
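Mechanically, "flagged as an n-gram in the government agency category" plausibly means a pipeline that extracts contiguous token sequences from each tweet and looks them up against per-category watchlists. A sketch; the watchlist below is invented, not the actual list from the Ruby Files data:

```python
# What "flagged as an n-gram" plausibly means mechanically: extract
# contiguous token sequences (n-grams) from each tweet and look them up
# against category watchlists. The watchlist here is invented for
# illustration; it is not the actual list from the Ruby Files.

WATCHLIST = {
    "government agency": {("nist",), ("federal", "trade", "commission")},
}

def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def flag(text):
    tokens = text.lower().split()
    hits = []
    for category, grams in WATCHLIST.items():
        for n in {len(g) for g in grams}:
            hits += [(category, g) for g in ngrams(tokens, n) if g in grams]
    return hits

print(flag("new NIST guidance on AI standards"))
# [('government agency', ('nist',))]
```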
Do you understand how dangerous this is?

AI “ethicists” have completely circumvented elected officials for well over 5 years now to implement whatever aligns w/ their worldview.

All in the name of “AI fairness”
I want to be clear in my language on this. When I say “they,” I am referring to a small group of people in ML who have hijacked the tech for nefarious purposes. I am not referring to the entire industry. There are many other people doing great work who find this deplorable.
I believe in the power of AI/ML. I want America to win the AI war. But we won’t lead with nefarious actors infiltrating the government.

They are using ML as a weapon to deploy personal ethics in the name of “AI fairness.”

It is unethical, immoral, and in any other industry-… twitter.com/i/web/status/1…

More from @sparklingruby

Mar 1
If AI chatbots keep defaming living people in search - a class action will be next.

These AI chatbots are outright defaming people in their AI-generated answers.

That is not legal: defamation is not speech protected by the First Amendment.
“Defamation occurs if you make a false statement of fact about someone else that harms that person’s reputation. Such speech is not protected by the First Amendment and could result in criminal and civil liability.”
“Defamation against public officials or public figures also requires that the party making the statement used “actual malice,” meaning the false statement was made “with knowledge that it was false or with reckless disregard of whether it was false or not.”
Mar 1
I find this highly disturbing.

There was no hacker. No one hacked anything.

Why is Bing basically calling me a criminal?

“What are the legal issues and risks for Kris Ruby?”
“She says she is willing to cooperate with any law enforcement investigation.”

Did I say that? It’s almost like Bing is setting me up as a criminal and telling me what is about to happen next.
“I think they are fake and she is lying”
Mar 1
“Does the product actually use #AI at all? If you think you can get away with baseless claims that your product is AI-enabled, think again.”

- Federal Trade Commission ftc.gov/business-guida…
“If something goes wrong – maybe it fails or yields biased results – you can’t just blame a third-party developer of the technology. And you can’t say you’re not responsible because that technology is a “black box” you can’t understand or didn’t know how to test.” -@FTC
“Whatever it can or can’t do, #AI is important, and so are the claims you make about it. You don’t need a machine to predict what the FTC might do when those claims are unsupported.”
Feb 26
AI / ML Researchers and Directors of ML Fairness pose a significant threat to society.

In many respects, they hold more power than some members of Congress.

They have the power to enact policy without even debating it.

They can directly embed their worldview in LLMs without… twitter.com/i/web/status/1…
Their definition of “fairness in ML” often means rewriting history.

If they don’t like the results, they will literally change them to align with their worldview.

It’s hard to explain how dangerous this is for the future of humanity in a tweet.
To be clear, there is a difference between legitimate researchers and those who are activists disguised as researchers.

Many bad actors have infiltrated academic institutions re ML/AI to implement political policy.

This has largely gone unchecked for years.
Feb 26
How can writers opt-out of having their work used in training data?

This is a serious IP issue. #AI
This is not the first time this has happened.

I am noticing a pattern.

I ask the AI a question about a topic I have extensively published on - and the answer it generates is verbatim from my own writing/reporting.
What is the incentive of publishing online only to have your writing be used in training data that you never consented to?

Over time, this will result in fewer people publishing content online if their work is used without their consent to train ML.

Zero upside for the writer.
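For writers who control their own site, the closest thing to an opt-out today is blocking known scraping crawlers at the server level. A sketch of a robots.txt entry targeting Common Crawl's CCBot, whose snapshots are a major source of LLM training text; note that honoring robots.txt is voluntary, so this is an opt-out by convention only:

```
# robots.txt at the site root. CCBot is Common Crawl's crawler; Common
# Crawl snapshots are widely used as LLM training text. Compliance with
# this file is voluntary - crawlers that ignore it are unaffected.
User-agent: CCBot
Disallow: /
```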
Feb 25
In 2019, Twitter acquired Fabula AI

“Fabula is a particularly notable acquisition, as the underlying technology is squarely focused on fighting the spread of misinformation online.”

venturebeat.com/ai/twitter-acq…
“Imagine that we find a strong correlation between the political affiliation of a user and this ‘credibility’ score.

So for example we can tell with hyper-ability that if someone is a Trump supporter then he or she will be mainly spreading fake news.”

techcrunch.com/2019/02/06/fab…
twitter.com/i/web/status/1…
“Step forward Fabula, which has patented what it dubs a “new class” of machine learning algorithms to detect “fake news” — in the emergent field of “Geometric Deep Learning”; where the datasets to be studied are so large and complex that traditional…”

patentimages.storage.googleapis.com/19/f6/c8/1e402…
twitter.com/i/web/status/1…
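To make the “Geometric Deep Learning” angle concrete: Fabula's pitch was to classify a story by the shape of its spread through the network rather than by its text. The sketch below is not Fabula's patented method; the two graph statistics and the "long, thin cascades look suspicious" heuristic are invented purely to illustrate propagation-based scoring:

```python
from collections import defaultdict

# Toy propagation-based scorer. Fabula's actual (patented) approach used
# geometric deep learning over spread graphs; here, two hand-computed
# graph statistics and a made-up heuristic stand in for the learned model.

def graph_features(edges):
    """edges: (parent, child) retweet pairs, rooted at 'root'."""
    children = defaultdict(list)
    for parent, child in edges:
        children[parent].append(child)

    def depth(node):
        return 1 + max((depth(c) for c in children[node]), default=0)

    d = depth("root")
    n_nodes = len({node for edge in edges for node in edge})
    return d, n_nodes / d          # cascade depth, average width

def suspicion(edges):
    d, width = graph_features(edges)
    return d / (width + 1)         # invented heuristic, not Fabula's

broad = [("root", f"u{i}") for i in range(10)]                      # shallow, wide
deep = [("root", "u0")] + [(f"u{i}", f"u{i+1}") for i in range(9)]  # long chain
print(f"broad: {suspicion(broad):.2f}  deep: {suspicion(deep):.2f}")
```

The point of the technique is that a story's diffusion pattern is a signal independent of its content - which is also what made critics worry it could be correlated with user attributes like political affiliation.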