On Tuesday... the Future of Life Institute published a letter asking for a moratorium of at least six months on "training AI systems more powerful than GPT-4," signed by more than 2,000 people, including Turing Award winner Yoshua Bengio and one of the world’s richest men, Elon Musk.
While there are a number of recommendations in the letter that we agree with (and proposed in our 2021 peer-reviewed paper known informally as "Stochastic Parrots"), such as "provenance and watermarking systems to help distinguish real from synthetic" media,
these are overshadowed by fearmongering and AI hype, which steers the discourse to the risks of imagined "powerful digital minds" with "human-competitive intelligence." Those hypothetical risks are the focus of a dangerous ideology called longtermism
that ignores the actual harms resulting from the deployment of AI systems today. The letter addresses none of the ongoing harms from these systems, including 1) worker exploitation and massive data theft to create products that profit a handful of entities,
2) the explosion of synthetic media in the world, which both reproduces systems of oppression and endangers our information ecosystem, and 3) the concentration of power in the hands of a few people, which exacerbates social inequities.
While we are not surprised to see this type of letter from a longtermist organization like the Future of Life Institute, which is generally aligned with a vision of the future in which we become radically enhanced posthumans, colonize space, and create trillions of digital people,
we are dismayed to see the number of computing professionals who have signed this letter, and the positive media coverage it has received. It is dangerous to distract ourselves with a fantasized AI-enabled utopia or apocalypse which promises either a "flourishing" or
"potentially catastrophic" future. Such language that inflates the capabilities of automated systems and anthropomorphizes them, as we note in Stochastic Parrots, deceives people into thinking that there is a sentient being behind the synthetic media.
This not only lures people into uncritically trusting the outputs of systems like ChatGPT, but also misattributes agency. Accountability properly lies not with the artifacts but with their builders.
What we need is regulation that enforces transparency.
Not only should it always be clear when we are encountering synthetic media, but organizations building these systems should also be required to document and disclose the training data and model architectures.
The onus of creating tools that are safe to use should be on the companies that build and deploy generative systems, which means that builders of these systems should be made accountable for the outputs produced by their products.
While we agree that "such decisions must not be delegated to unelected tech leaders," we also note that such decisions should not be up to the academics experiencing an "AI summer," who are largely financially beholden to Silicon Valley.
Those most impacted by AI systems, the immigrants subjected to "digital border walls," the women being forced to wear specific clothing, the workers experiencing PTSD while filtering outputs of generative systems, the artists seeing their work stolen for corporate profit,
and the gig workers struggling to pay their bills should have a say in this conversation.
Contrary to the letter’s narrative that we must "adapt" to a seemingly pre-determined technological future and cope "with the dramatic economic and political disruptions
(especially to democracy) that AI will cause," we do not agree that our role is to adjust to the priorities of a few privileged individuals and what they decide to build and proliferate. We should be building machines that work for us,
instead of "adapting" society to be machine readable and writable. The current race towards ever larger "AI experiments" is not a preordained path where our only choice is how fast to run, but rather a set of decisions driven by the profit motive.
The actions and choices of corporations must be shaped by regulation which protects the rights and interests of people.
It is indeed time to act: but the focus of our concern should not be imaginary "powerful digital minds."
Instead, we should focus on the very real and very present exploitative practices of the companies claiming to build them, who are rapidly centralizing power and increasing social inequities.
• • •
BTW remember that "the letter" listed eugenics as only a "potentially catastrophic" event, as we note in the footnote of our statement. This shouldn't be a surprise to anyone who knows anything about the institute that produced "the letter" & longtermism. dair-institute.org/blog/letter-st…
[1] We note that eugenics, a very real and harmful practice with roots in the 19th century and running right through today, is listed in footnote 5 of the letter as an example of something that is also only "potentially catastrophic."
"...he characterizes the most devastating disasters throughout human history, such as the two World Wars (including the Holocaust)...as “mere ripples” when viewed from “the perspective of humankind as a whole.” currentaffairs.org/2021/07/the-da…
The very first citation in this stupid letter is to our #StochasticParrots paper:
"AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research[1]"
EXCEPT
that one of the main points we make in the paper is that one of the biggest harms of large language models is caused by CLAIMING that LLMs have "human-competitive intelligence."
They basically say the opposite of what we say and cite our paper?
They are SO incredibly manipulative it makes my blood boil. We see what you've been hyping up, who you put your $10B into, and what they say. Why do the people who create your product talk about how people are stochastic parrots as well?
And your protégés that you're putting $10B into keep making this point, and then continue to talk about colonizing the cosmos, utopia, and how in the next 5 years their tool will read legal documents and medical documents and understand them and such?
"After a conference on AI at the Pontifical Academy Of Science in Rome, discussing with some friends (among them Aimee van Wynsberghe), we argued that the first and foremost AI bias is its name." by Stefano Quintarelli.
"It induces analogies that have limited adhrence to reality and it generates infinite speculations (some of them causing excessive expectations and fears)...Because of this misconception, we proposed we should drop the usage of the term “Artificial Intelligence...”
"and adopt a more appropriate and scoped-limited terminology for these technologies which better describe what these technologies are: Systematic Approaches to Learning Algorithms and Machine Inferences."
These are from the public Discord for the LAION project. Take a look at the discussion by professors based out of the esteemed MILA, like @irinarish.
1) What "therapy data" is this LLM you're talking about fine-tuned on?
2) You see this hugely unethical thing and are like, yes, we need to do this, and stability.ai needs to help us with PR and legal issues.
3) Ahh yes the "woke" crowd that "attacked Yann LeCun" "triggered by BLM." I am so happy that I don't have to be anywhere near MILA. How can any Black person survive? I've heard from a few who've been telling me how awful it is. No wonder with this shit.
"Musk’s Twitter is simply a further manifestation of how self-regulation by tech companies will never work, & it highlights the need for genuine oversight...Things have to change."
"The Algorithmic Accountability Act, the Platform Accountability & Transparency Act,...,the [EU]'s Digital Services & AI Acts...demonstrate how legislation could create a pathway for external parties to access source code & data to ensure compliance with antibias requirements."
"Companies would have to statistically prove that their algorithms are not harmful, in some cases allowing individuals from outside their companies an unprecedented level of access to conduct source-code audits, similar to the work my team was doing at Twitter."