EXCUSE me? Y'all have some DAMN nerve. Read the thread and read this response.
Read the entire thread here, which is literally just screenshots of the dude's own writings.
White dudes would rather come to the defense of men with ZERO expertise in AI claiming to be "AI researchers," leading full-blown apocalyptic cults talking about AIR STRIKES, & here we have a leader of a lab coming to his defense after seeing a detailed thread with his writings.
I recommend that everyone read this entire thread & THEN understand the context that those of us who're not into white supremacist apocalyptic cults have to survive in. It's a damn miracle that there are any of us in this field at all, SMH.
Try to imagine anyone OTHER than a white dude being called an "AI researcher" for writing some Harry Potter fanfic, starting an apocalyptic cult & advocating for air strikes on facilities in the good ol' USA. I bet we'd get @TIME platforms & white dudes coming to our defense!
I can't STAND dudes like this getting their names on papers talking about gender "trends" & such. I should be collecting the trophies of privileged white dudes with fragile egos I'm being blocked by. Pretty soon, I'll have the whole #TESCREAL bundle. dl.acm.org/doi/fullHtml/1…
"Why would you, a CEO or executive at a high-profile technology company...proclaim how worried you are about the product you are building and selling?
Answer: If apocalyptic doomsaying about the terrifying power of AI serves your marketing strategy." latimes.com/business/techn…
"OpenAI has worked for years to carefully cultivate an image of itself as a team of hype-proofed humanitarian scientists, pursuing AI for the good of all — which meant that when its moment arrived, the public would be well-primed to receive its apocalyptic AI proclamations...
credulously, as scary but impossible-to-ignore truths about the state of technology."
This is why I was so angry when OpenAI was announced with that framing in 2015.
BTW remember that "the letter" listed eugenics as only a "potentially catastrophic" event, as we note in the footnote of our statement. This shouldn't be a surprise to anyone who knows anything about the institute that produced "the letter" & longtermism. dair-institute.org/blog/letter-st…
[1] We note that eugenics, a very real and harmful practice with roots in the 19th century and running right through today, is listed in footnote 5 of the letter as an example of something that is also only "potentially catastrophic."
"...he characterizes the most devastating disasters throughout human history, such as the two World Wars (including the Holocaust)...as “mere ripples” when viewed from “the perspective of humankind as a whole.” currentaffairs.org/2021/07/the-da…
On Tuesday...the Future of Life Institute published a letter asking for a six-month minimum moratorium on "training AI systems more powerful than GPT-4," signed by more than 2,000 people, including Turing Award winner Yoshua Bengio & one of the world's richest men, Elon Musk.
While there are a number of recommendations in the letter that we agree with (and proposed in our 2021 peer-reviewed paper known informally as "Stochastic Parrots"), such as "provenance and watermarking systems to help distinguish real from synthetic" media,
The very first citation in this stupid letter is to our #StochasticParrots paper:
"AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research[1]"
EXCEPT
that one of the main points we make in the paper is that one of the biggest harms of large language models is caused by CLAIMING that LLMs have "human-competitive intelligence."
They basically say the opposite of what we say and cite our paper?
They are SO incredibly manipulative it makes my blood boil. We see what you've been hyping up, who you put your $10B in, and what they say. Why do the people who create your product talk about how people are stochastic parrots as well?
And your protégés that you're putting $10B into keep on making this point, and then continue to talk about colonizing the cosmos, utopia, and how in the next 5 years their tool will read legal documents and medical documents and understand them and such?
"After a conference on AI at the Pontifical Academy Of Science in Rome, discussing with some friends (among them Aimee van Wynsberghe), we argued that the first and foremost AI bias is its name." by Stefano Quintarelli.
"It induces analogies that have limited adhrence to reality and it generates infinite speculations (some of them causing excessive expectations and fears)...Because of this misconception, we proposed we should drop the usage of the term “Artificial Intelligence...”
"and adopt a more appropriate and scoped-limited terminology for these technologies which better describe what these technologies are: Systematic Approaches to Learning Algorithms and Machine Inferences."