Kristen Ruby
Jan 12 · 34 tweets · 7 min read
A thread on the recent report published by @OpenAI: 🧵
[Screenshots of the report omitted]
Academic researchers often lack the introspection to understand the potential for abuse of the technology they help create through the word-list suggestions they make to AI developers, data scientists, and Trust and Safety teams at big tech companies in the United States.
Direct attempt to influence policy through recommendation.

This is exactly what plays out at big tech companies.

Academic researchers suggest word lists to monitor for “misinfo.”

Trust and Safety and Data Science teams then follow the recommendations provided by the academic experts.
Aka: One option for the US is to monitor the term XX.

We find this term to be likely associated w/ misinfo.

Trust & Safety tells Data Science to add the new misinfo word to the parameter.

The Data Science team adds the word.

No one stops to question whether the word was ever actually misinfo.
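The pipeline described above can be sketched in a few lines, assuming a simple keyword filter; every name and term below is hypothetical, not taken from any real moderation system:

```python
# Hypothetical sketch of the pipeline described in the thread: an
# externally supplied word list is added to a moderation parameter and
# applied to content, with no review of whether the terms are misinfo.
# All names and terms here are illustrative, not from any real system.

MISINFO_TERMS = set()  # the "param" maintained by the Data Science team

def add_recommended_terms(terms):
    """Trust & Safety forwards an academic word list; terms are added as-is."""
    MISINFO_TERMS.update(t.lower() for t in terms)

def flag_tweet(text):
    """Return the listed terms found in the text (empty set = not flagged)."""
    return set(text.lower().split()) & MISINFO_TERMS

# A recommendation arrives and is applied without scrutiny:
add_recommended_terms(["hypothetical-term"])
print(flag_tweet("a post containing hypothetical-term"))  # {'hypothetical-term'}
```

The point of the sketch is the last step: the list is applied exactly as received, so a term that was never misinfo gets flagged with the same confidence as one that was.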


More from @sparklingruby

Feb 26
AI / ML Researchers and Directors of ML Fairness pose a significant threat to society.

In many respects, they hold more power than some members of Congress.

They have the power to enact policy without even debating it.

They can directly embed their worldview in LLM without… twitter.com/i/web/status/1…
Their definition of “fairness in ML” often means rewriting history.

If they don’t like the results, they will literally change them to align to their worldview.

It’s hard to explain how dangerous this is for the future of humanity in a tweet.
I don’t think people truly understand what “bias in AI” or “responsible machine learning” means. It does not mean what you think it does. It often means the exact opposite. The field has been completely weaponized by academic institutions and big tech alliances to “combat… twitter.com/i/web/status/1…
Feb 26
How can writers opt-out of having their work used in training data?

This is a serious IP issue. #AI
This is not the first time this has happened.

I am noticing a pattern.

I ask the AI a question about a topic I have extensively published on, and the answer it generates is verbatim from my own writing and reporting.
What is the incentive of publishing online only to have your writing be used in training data that you never consented to?

Over time, this will result in fewer people publishing content online if their work is used without their consent to train ML models.

Zero upside for the writer.
Feb 25
In 2019, Twitter acquired Fabula AI

“Fabula is a particularly notable acquisition, as the underlying technology is squarely focused on fighting the spread of misinformation online.”

venturebeat.com/ai/twitter-acq…
“Imagine that we find a strong correlation between the political affiliation of a user and this ‘credibility’ score.

So for example we can tell with hyper-ability that if someone is a Trump supporter then he or she will be mainly spreading fake news. techcrunch.com/2019/02/06/fab…twitter.com/i/web/status/1…
“Step forward Fabula, which has patented what it dubs a “new class” of machine learning algorithms to detect “fake news” — in the emergent field of “Geometric Deep Learning”; where the datasets to be studied are so large and complex that traditional patentimages.storage.googleapis.com/19/f6/c8/1e402…twitter.com/i/web/status/1…
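The correlation described in the quote above can be illustrated with a toy calculation; the affiliations and scores below are invented, and this is not Fabula's patented method:

```python
# Toy illustration of the correlation described in the quote above:
# if a model's "credibility" scores correlate strongly with a political
# attribute, the attribute effectively becomes a proxy signal.
# All data below is invented for illustration; this is NOT Fabula's method.
from statistics import mean

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

affiliation = [1, 1, 1, 0, 0, 0]               # hypothetical binary attribute
credibility = [0.2, 0.3, 0.1, 0.8, 0.9, 0.7]   # hypothetical model scores

r = pearson(affiliation, credibility)
print(round(r, 2))  # -0.96: affiliation strongly predicts a low score
```

With a correlation that strong, a model trained on these scores would behave almost exactly like one that classified users by affiliation directly, which is the concern the quote raises.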
Feb 23
The Ruby Files reveal that Twitter weaponized Machine Learning against American citizens. 

This was not merely an AI pilot; it was in full production.

ChatGPT is a distraction from the real issue at hand: not how #AI could be used rubymediagroup.com/twitter-artifi…twitter.com/i/web/status/1…
Twitter’s Machine Learning models influence everything from global elections to information control during a pandemic.

These are powerful levers of communication as a form of AI warfare during a digital arms race.

While Musk fundamentally lacks controls of the models in the… twitter.com/i/web/status/1…
Revealing the inner workings of the models would ultimately reveal that Twitter will never truly be a free speech platform. 

Not because of the owner- but because AI is not neutral. 

Releasing machine learning training documents would unravel that narrative.

A social media… twitter.com/i/web/status/1…
Feb 23
"In the first quarter of 2015, DARPA conducted the Twitter Bot Detection Challenge:

A 4-week competition to test the effectiveness of influence bot detection methods developed under the DARPA (SMISC) program.

The challenge was to identify influence bots supporting a… twitter.com/i/web/status/1…
Dec 9, 2022
BREAKING: Former Twitter employee shares exclusive details with me on AI, Access to DMs, and more.

Thread below ⬇️
1. What is guano?
2. Guano further explained:
