Since we've been looking for more things to do, @emilymbender @mmitchell_ai @mcmillan_majora and I wrote a statement about the horrible "letter" on the AI apocalypse, the very first citation of which is our #StochasticParrots paper.
dair-institute.org/blog/letter-st…
On Tuesday...the Future of Life Institute published a letter asking for a six-month minimum moratorium on "training AI systems more powerful than GPT-4," signed by more than 2,000 people, including Turing Award winner Yoshua Bengio & one of the world's richest men, Elon Musk.
While there are a number of recommendations in the letter that we agree with (and proposed in our 2021 peer-reviewed paper known informally as "Stochastic Parrots"), such as "provenance and watermarking systems to help distinguish real from synthetic" media,
these are overshadowed by fearmongering and AI hype, which steers the discourse to the risks of imagined "powerful digital minds" with "human-competitive intelligence." Those hypothetical risks are the focus of a dangerous ideology called longtermism
that ignores the actual harms resulting from the deployment of AI systems today. The letter addresses none of the ongoing harms from these systems, including 1) worker exploitation and massive data theft to create products that profit a handful of entities,
2) the explosion of synthetic media in the world, which both reproduces systems of oppression and endangers our information ecosystem, and 3) the concentration of power in the hands of a few people which exacerbates social inequities.
While we are not surprised to see this type of letter from a longtermist organization like the Future of Life Institute, which is generally aligned with a vision of the future in which we become radically enhanced posthumans, colonize space, & create trillions of digital people,
we are dismayed to see the number of computing professionals who have signed this letter, and the positive media coverage it has received. It is dangerous to distract ourselves with a fantasized AI-enabled utopia or apocalypse which promises either a "flourishing" or
"potentially catastrophic" future. Such language that inflates the capabilities of automated systems and anthropomorphizes them, as we note in Stochastic Parrots, deceives people into thinking that there is a sentient being behind the synthetic media.
This not only lures people into uncritically trusting the outputs of systems like ChatGPT, but also misattributes agency. Accountability properly lies not with the artifacts but with their builders.

What we need is regulation that enforces transparency.
Not only should it always be clear when we are encountering synthetic media, but organizations building these systems should also be required to document and disclose the training data and model architectures.
The onus of creating tools that are safe to use should be on the companies that build and deploy generative systems, which means that builders of these systems should be made accountable for the outputs produced by their products.
While we agree that "such decisions must not be delegated to unelected tech leaders," we also note that such decisions should not be up to the academics experiencing an "AI summer," who are largely financially beholden to Silicon Valley.
Those most impacted by AI systems, the immigrants subjected to "digital border walls," the women being forced to wear specific clothing, the workers experiencing PTSD while filtering outputs of generative systems, the artists seeing their work stolen for corporate profit,
and the gig workers struggling to pay their bills should have a say in this conversation.

Contrary to the letter’s narrative that we must "adapt" to a seemingly pre-determined technological future & cope "with the dramatic economic & political disruptions
(especially to democracy) that AI will cause," we do not agree that our role is to adjust to the priorities of a few privileged individuals and what they decide to build and proliferate. We should be building machines that work for us,
instead of "adapting" society to be machine readable and writable. The current race towards ever larger "AI experiments" is not a preordained path where our only choice is how fast to run, but rather a set of decisions driven by the profit motive.
The actions and choices of corporations must be shaped by regulation which protects the rights and interests of people.

It is indeed time to act: but the focus of our concern should not be imaginary "powerful digital minds."
Instead, we should focus on the very real and very present exploitative practices of the companies claiming to build them, who are rapidly centralizing power and increasing social inequities.


More from @timnitGebru

Mar 31
BTW remember that "the letter" listed eugenics as only a "potentially catastrophic" event, as we note in the footnote of our statement. This shouldn't be a surprise to anyone who knows anything about the institute that produced "the letter" & longtermism.
dair-institute.org/blog/letter-st…
[1] We note that eugenics, a very real and harmful practice with roots in the 19th century and running right through today, is listed in footnote 5 of the letter as an example of something that is also only "potentially catastrophic."
"...he characterizes the most devastating disasters throughout human history, such as the two World Wars (including the Holocaust)...as “mere ripples” when viewed from “the perspective of humankind as a whole.”
currentaffairs.org/2021/07/the-da…
Mar 30
The very first citation in this stupid letter is to our #StochasticParrots paper,

"AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research[1]"

EXCEPT
that one of the main points we make in the paper is that one of the biggest harms of large language models is caused by CLAIMING that LLMs have "human-competitive intelligence."

They basically say the opposite of what we say and cite our paper?
The rest of the people they cite in footnote #1 are all longtermists. Again, please read
currentaffairs.org/2021/07/the-da…
Mar 6
In this @60Minutes segment, Microsoft chairman Brad Smith says "it's a screen, it's not a person."

Then why do you keep on misleading people as to the capabilities of your products?

Why did you endorse this manifesto by Sam Altman, "Planning for AGI and beyond"?
cbsnews.com/news/chatgpt-l…
They are SO incredibly manipulative it makes my blood boil. We see what you've been hyping up, who you put your $10B in, and what they say. Why do the people who create your product talk about how people are stochastic parrots as well?
And your protégés that you're putting $10B into keep on making this point, and then continue to talk about colonizing the cosmos, utopia, and how in the next 5 years their tool will read legal documents and medical documents and understand them and such?
Mar 4
"After a conference on AI at the Pontifical Academy Of Science in Rome, discussing with some friends (among them Aimee van Wynsberghe), we argued that the first and foremost AI bias is its name." by Stefano Quintarelli.

h/t @emilymbender
blog.quintarelli.it/2019/11/lets-f…
"It induces analogies that have limited adhrence to reality and it generates infinite speculations (some of them causing excessive expectations and fears)...Because of this misconception, we proposed we should drop the usage of the term “Artificial Intelligence...”
"and adopt a more appropriate and scoped-limited terminology for these technologies which better describe what these technologies are: Systematic Approaches to Learning Algorithms and Machine Inferences."
Mar 2
These are from a public Discord for the LAION project. Take a look at the discussion by professors based out of the esteemed MILA, like @irinarish.

1) What "therapy data" is this LLM you're talking about fine-tuned on? Irina  I replied to that guy. It is sad to see some people IIrina  Perhaps this is something to discuss with @Emad and hIrina Rish  We should definitely apply LLM we are training f
2) You see this hugely unethical thing and are like yes we need to do this and stability.ai needs to help us with PR and legal issues.

Do you ever go through IRB?
3) Ahh yes the "woke" crowd that "attacked Yann LeCun" "triggered by BLM." I am so happy that I don't have to be anywhere near MILA. How can any Black person survive? I've heard from a few who've been telling me how awful it is. No wonder with this shit.
Mar 1
"Musk’s Twitter is simply a further manifestation of how self-regulation by tech companies will never work, & it highlights the need for genuine oversight...Things have to change."
"The Algorithmic Accountability Act, the Platform Accountability & Transparency Act,...,the [EU]'s Digital Services & AI Acts...demonstrate how legislation could create a pathway for external parties to access source code & data to ensure compliance with antibias requirements."
"Companies would have to statistically prove that their algorithms are not harmful, in some cases allowing individuals from outside their companies an unprecedented level of access to conduct source-code audits, similar to the work my team was doing at Twitter."
