Read the entire thread here, which is literally just screenshots of the dude, and read this response. Y'all have some DAMN nerve.
White dudes would rather come to the defense of men with ZERO expertise in AI claiming to be "AI researchers," leading full-blown apocalyptic cults talking about AIR STRIKES, & here we have a leader of a lab coming to his defense after seeing a detailed thread of his writings.
I recommend that everyone read this entire thread & THEN understand the context that those of us who're not into white supremacist apocalyptic cults have to survive in. It's a damn miracle that there are any of us in this field at all, SMH.
Try to imagine anyone OTHER than a white dude being called an "AI researcher" for writing some Harry Potter fanfic, starting an apocalyptic cult & advocating for air strikes on facilities in the good ol' USA. I bet we'd get @TIME platforms & white dudes coming to our defense!
I can't STAND dudes like this getting their names on papers talking about gender "trends" and such. I should be collecting trophies for all the privileged white dudes with fragile egos I'm being blocked by. Pretty soon, I'll have the whole #TESCREAL bundle.
dl.acm.org/doi/fullHtml/1…

More from @timnitGebru

Apr 3
"Why would you, a CEO or executive at a high-profile technology company...proclaim how worried you are about the product you are building and selling?

Answer: If apocalyptic doomsaying about the terrifying power of AI serves your marketing strategy."
latimes.com/business/techn…
"OpenAI has worked for years to carefully cultivate an image of itself as a team of hype-proofed humanitarian scientists, pursuing AI for the good of all — which meant that when its moment arrived, the public would be well-primed to receive its apocalyptic AI proclamations...
credulously, as scary but impossible-to-ignore truths about the state of technology."

This is why I was so angry when they were announced as such in 2015.
Mar 31
BTW remember that "the letter" listed eugenics as only a "potentially catastrophic" event, as we note in the footnote of our statement. This shouldn't be a surprise to anyone who knows anything about the institute that produced "the letter" & longtermism.
dair-institute.org/blog/letter-st…
[1] We note that eugenics, a very real and harmful practice with roots in the 19th century and running right through today, is listed in footnote 5 of the letter as an example of something that is also only "potentially catastrophic."
"...he characterizes the most devastating disasters throughout human history, such as the two World Wars (including the Holocaust)...as “mere ripples” when viewed from “the perspective of humankind as a whole.”
currentaffairs.org/2021/07/the-da…
Mar 31
Since we've been looking for more things to do, @emilymbender @mmitchell_ai @mcmillan_majora and I wrote a statement about the horrible "letter" on the AI apocalypse, the very first citation of which was our #StochasticParrots paper.
dair-institute.org/blog/letter-st…
On Tuesday...the Future of Life Institute published a letter asking for a six-month minimum moratorium on "training AI systems more powerful than GPT-4," signed by more than 2,000 people, including Turing award winner Yoshua Bengio & one of the world’s richest men, Elon Musk.
While there are a number of recommendations in the letter that we agree with (and proposed in our 2021 peer-reviewed paper known informally as "Stochastic Parrots"), such as "provenance and watermarking systems to help distinguish real from synthetic" media,
Mar 30
The very first citation in this stupid letter is to our #StochasticParrots Paper,

"AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research[1]"

EXCEPT
that one of the main points we make in the paper is that one of the biggest harms of large language models is caused by CLAIMING that LLMs have "human-competitive intelligence."

They basically say the opposite of what we say and cite our paper?
The rest of the people they cite in footnote #1 are all longtermists. Again, please read
currentaffairs.org/2021/07/the-da…
Mar 6
In this @60Minutes interview, Microsoft chairman Brad Smith says "it's a screen, it's not a person."

Then why do you keep on misleading people as to the capabilities of your products?

Why did you endorse this manifesto by Sam Altman "planning for AGI and beyond"?
cbsnews.com/news/chatgpt-l…
They are SO incredibly manipulative it makes my blood boil. We see what you've been hyping up, who you put your $10B in, and what they say. Why do the people who create your product talk about how people are stochastic parrots as well?
And your proteges that you're putting $10B into keep on making this point, and then continue to talk about colonizing the cosmos, utopia, and how in the next 5 years their tool will read legal documents and medical documents and understand them and such?
Mar 4
"After a conference on AI at the Pontifical Academy Of Science in Rome, discussing with some friends (among them Aimee van Wynsberghe), we argued that the first and foremost AI bias is its name." by Stefano Quintarelli.

h/t @emilymbender
blog.quintarelli.it/2019/11/lets-f…
"It induces analogies that have limited adherence to reality and it generates infinite speculations (some of them causing excessive expectations and fears)...Because of this misconception, we proposed we should drop the usage of the term “Artificial Intelligence...”
"and adopt a more appropriate and scoped-limited terminology for these technologies which better describe what these technologies are: Systematic Approaches to Learning Algorithms and Machine Inferences."
