@OpenAI@sama From the get-go this is just gross. They think they are really in the business of developing/shaping "AGI". And they think they are positioned to decide what "benefits all of humanity".
Then @sama invites the reader to imagine that AGI ("if successfully created") is literally magic. Also, what does "turbocharging the economy" mean, if there is already abundance? More $$$ for the super rich, has to be.
@sama Also, note the rhetorical sleight of hand there. Paragraph 1 has AGI as a hypothetical ("if successfully created") but by para 2 it already is something that "has potential".
But oh noes -- the magical imagined AGI also has downsides! But it is so so tempting and important to create, that we can't not create it. Note the next rhetorical sleight of hand here. Now AGI is an unpreventable future.
What's in fn1? A massive presupposition failure: The GPTs are learning information about word distributions in lots and lots of text + what word patterns are associated with higher scores (from human raters). That's it.
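(To make that concrete: a toy sketch, entirely my own illustration and not OpenAI's code, of the two signals in that sentence: (1) estimating which words follow which from corpus counts, and (2) scoring outputs against human ratings. Every name, text, and number below is made up.)

```python
# Toy sketch (not OpenAI's code) of the two training signals described above:
# (1) counting which words follow which in a corpus, and (2) scoring outputs
# by human preference. All names, texts, and scores are invented illustrations.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# (1) "Word distributions": estimate P(next word | previous word) by counting.
bigram_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigram_counts[prev][nxt] += 1

def next_word_distribution(prev):
    counts = bigram_counts[prev]
    total = sum(counts.values())
    return {word: c / total for word, c in counts.items()}

print(next_word_distribution("the"))
# {'cat': 0.25, 'mat': 0.25, 'dog': 0.25, 'rug': 0.25}

# (2) "Higher scores (from human raters)": a reward model is fit to human
# preference labels and used to rank candidate outputs. Faked here with a
# hard-coded lookup table, just to show the shape of the signal.
human_scores = {"the cat sat on the mat .": 1.0, "the the the the": 0.0}

def reward(text):
    return human_scores.get(text, 0.5)  # unseen outputs get a neutral score

print(reward("the cat sat on the mat ."))  # 1.0
```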
Then a series of principles for how to ensure that AGI is "beneficial". This includes "governance of AGI" as something that is "widely and fairly shared", but I've seen exactly nothing from @OpenAI about building, or advocating for, shared governance structures.
@OpenAI Meanwhile, "continuously learn and adapt by deploying less powerful versions of the technology" suggests that they think that the various GPTs are "less powerful versions of AGI".
<recordscratch> hang on: did he just say "maximally flourish in the universe"? What kind of weirdo longtermist, space-colonizing fantasy is that coming from?
Similarly here, this seems designed to promote the idea that the models they have already put into their API (GPT-2, GPT-3, ChatGPT) are the early stages of "AGI" being "stewarded into existence".
Then there's a glib paragraph about how "most expert predictions have been wrong so far" ending in footnote 2:
Paraphrasing: "Our experts thought we could do this as a non-profit, but then we realized we wanted MOAR MONEY. Also we thought we should just do everything open source but then we decided nah. Also, can't be bothered to even document the systems or datasets."
Hey @OpenAI, I'm speaking to you from 2018 to say: DOCUMENT YOUR DAMN DATASETS. Also, to everyone else: If you don't know what's in it, don't use it.
@OpenAI Okay, back to @sama. "As our systems get closer to AGI" -- here's a false presupposition again. Your system isn't AGI, it isn't a step towards AGI, and yet you're dropping that in as if the reader is just supposed to nod along.
Oh, and did you all catch that shout-out to xrisk? Weirdo longtermist fantasy indeed.
As I said in my thread yesterday, I wish I could just laugh at these people, but unfortunately they are attempting (and, I think, succeeding) to shape the discussion about regulation of so-called AI systems.
What's needed is regulation about: how data can be collected and used, transparency of datasets, models and the deployment of text/image generation systems, recourse and contestability of any automated decision making, etc.
Talking about text synthesis machines as if they were "AI" muddies the waters and hampers effective discussions about data rights, transparency, protection from automated decision systems, surveillance, and all the rest of the pressing issues.
The problem isn't regulating "AI" or future "AGI". It's protecting individuals from corporate and government overreach using "AI" to cut costs and/or deflect accountability.
The contradiction in these next 2 paras is stunning: We think you should be able to do whatever you want with our systems, bc "diversity of ideas" but also we think we can align the systems with "human values". So, assholes can create fake revenge porn, but that's okay because...?
LOLOL -- calling something a "ratio" doesn't make it measurable or, ahem, real.
[This is exhausting, but I started it. Might as well finish.]
Wait what -- now they're talking seriously about "late-stage AGI development"?
Here's a bunch of promises about future oversight by unnamed independent auditors and also "major world governments" (who counts as major? who decides?). Also, how about just DOCUMENTING YOUR DAMN DATA for everyone to see?
"Continuum of intelligence" is gross, not least for the suggestions of ableism, eugenics, transhumanism etc. But also "rate of progress [of] the past decade" -?Progress towards what? Ever larger carbon footprints? More plausible fake text?
And, more to the pt: There are harms NOW: to privacy, theft of creative output, harms to our information ecosystems, and harms from the scaled reproduction of biases. An org that cared about "benefitting humanity" wouldn't be developing/disseminating tech that does those things.
No, they don't want to address actual problems in the actual world (which would require ceding power). They want to believe themselves gods who can not only create a "superintelligence" but have the beneficence to do so in a way that is "aligned" with humanity.
/fin
As OpenAI and Meta introduce LLM-driven searchbots, I'd like to once again remind people that neither LLMs nor chatbots are good technology for information access.
Why are LLMs bad for search? Because LLMs are nothing more than statistical models of the distribution of word forms in text, set up to output plausible-sounding sequences of words.
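(A toy illustration of what that means, mine and made up for this thread: an autoregressive sampler that picks each next word from a learned-looking distribution. Nothing in the loop consults a source or checks a fact; fluent output is all it is built to produce.)

```python
# Toy autoregressive sampler, invented for illustration: it draws each next
# word from a (here hand-written) distribution and stops when it runs out of
# continuations. Nothing consults a source or checks a fact along the way.
import random

model = {
    "<s>":     [("the", 0.6), ("a", 0.4)],
    "the":     [("capital", 0.5), ("answer", 0.5)],
    "a":       [("capital", 0.5), ("answer", 0.5)],
    "capital": [("is", 1.0)],
    "answer":  [("is", 1.0)],
    "is":      [("Paris", 0.5), ("London", 0.5)],  # equally "plausible"; no fact-check
}

def sample_next(word):
    words, probs = zip(*model[word])
    return random.choices(words, weights=probs)[0]

word, out = "<s>", []
while word in model:          # keep going while a continuation exists
    word = sample_next(word)
    out.append(word)

print(" ".join(out))  # e.g. "the capital is London": fluent, not verified
```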
Either it's a version of ChatGPT OR it's a search system where people can find the actual sources of the information. Those two things can't both be true at the same time. /2
Also: the output of "generative AI", synthetic text, is NOT information. So, UK friends, if your government is actually using it to respond to freedom of information requests, they are presumably violating their own laws about freedom of information requests. /3
It is depressing how often Bender & Koller 2020 is cited incorrectly. My best guess is that ppl writing abt whether or not LLMs 'understand' or 'are agents' have such strongly held beliefs abt what they want to be true that this impedes their ability to understand what we wrote.
Or maybe they aren't actually reading the paper --- just summarizing based on what other people (with similar beliefs) have mistakenly said about the paper.
>>
Today's case in point is a new arXiv posting, "Are Language Models More Like Libraries or Like Librarians? Bibliotechnism, the Novel Reference Problem, and the Attitudes of LLMs" by Lederman & Mahowald, posted Jan 10, 2024.
A quick thread on #AIhype and other issues in yesterday's Gemini release: 1/
#1 -- What an utter lack of transparency. Researchers from multiple groups, including @mmitchell_ai and @timnitgebru when they were at Google, have been calling for clear and thorough documentation of training data & trained models since 2017. 2/
In Bender & Friedman 2018, we put it like this: 3/
With the OpenAI clownshow, there's been renewed media attention on the xrisk/"AI safety" nonsense. Personally, I've had a fresh wave of reporters asking me naive questions (+ some contacts from old hands who know how to handle ultra-rich man-children with god complexes). 🧵1/
As a quick reminder: AI doomerism is also #AIhype. The idea that synthetic text extruding machines are harbingers of AGI that is on the verge of combusting into consciousness and then turning on humanity is unscientific nonsense. 2/
At the same time, it serves to suggest that the software is powerful, even magically so: if the "AI" could take over the world, it must be something amazing. 3/
"[False arrests w/face rec tech] should be at the heart of one of the most urgent contemporary debates: that of artificial intelligence and the dangers it poses. That it is not, and that so few recognise it as significant, shows how warped has become the discussion of AI,"
>>
"We have stumbled into a digital panopticon almost without realising it. Yet to suggest we live in a world shaped by AI is to misplace the problem. There is no machine without a human, and nor is there likely to be."