Okay, I read it so you don't have to. Here's a reaction thread to @openAI / @sama 's blog post from Friday "Planning for AGI and beyond":

openai.com/blog/planning-…
@OpenAI @sama From the get-go this is just gross. They think they are really in the business of developing/shaping "AGI". And they think they are positioned to decide what "benefits all of humanity". [Screencap: "Our mission is to ensure that artificial ge…"]
Then @sama invites the reader to imagine that AGI ("if successfully created") is literally magic. Also, what does "turbocharging the economy" mean, if there is already abundance? More $$$ for the super rich, has to be. [Screencap: "If AGI is successfully created, this techno…"]
@sama Also, note the rhetorical sleight of hand there. Paragraph 1 has AGI as a hypothetical ("if successfully created") but by para 2 it already is something that "has potential". [Screencap: "If AGI is successfully created, this techno…"]
But oh noes -- the magical imagined AGI also has downsides! But it is so so tempting and important to create, that we can't not create it. Note the next rhetorical sleight of hand here: now AGI is an unpreventable future. [Screencap: "On the other hand, AGI would also come with…"]
What's in fn1? A massive presupposition failure: The GPTs are learning information about word distributions in lots and lots of text + what word patterns are associated with higher scores (from human raters). That's it. [Screencap: "We seem to have been given lots of gifts re…"]
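[Aside: for readers who want to see what "learning word distributions" literally means, here is a toy sketch -- obviously not OpenAI's code, and vastly simplified (a bigram count model, no neural net, no human-rater scores) -- but the underlying object being learned is the same kind of thing: conditional distributions over next words.]

```python
from collections import Counter, defaultdict

# Toy corpus. "Learning word distributions" means tallying which words
# follow which contexts -- here, single-word contexts (a bigram model).
corpus = "the cat sat on the mat . the cat saw the dog .".split()

# Map each context word to a Counter of the words observed after it.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word_distribution(context):
    """Relative frequencies of words seen after `context` in the corpus."""
    counts = follows[context]
    total = sum(counts.values())
    return {word: count / total for word, count in counts.items()}

# After "the", the corpus contains: cat (2x), mat (1x), dog (1x).
print(next_word_distribution("the"))
# -> {'cat': 0.5, 'mat': 0.25, 'dog': 0.25}
```

A real GPT replaces the count table with a neural network conditioned on long contexts, but the output is still a probability distribution over next tokens -- no understanding required.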
Then a series of principles for how to ensure that AGI is "beneficial". This includes "governance of AGI" as something that is "widely and fairly shared", but I've seen exactly nothing from @OpenAI about or advocating for building shared governance structures. [Screencap: "1. We want AGI to empower humanity to maxim…"]
@OpenAI Meanwhile, "continuously learn and adapt by deploying less powerful versions of the technology" suggests that they think that the various GPTs are "less powerful versions of AGI".
<recordscratch> hang on: did he just say "maximally flourish in the universe"? What kind of weirdo longtermist, space-colonizing fantasy is that coming from?
Similarly here, this seems designed to promote the idea that the models they have already put into their API (GPT-2, GPT-3, ChatGPT) are the early stages of "AGI" being "stewarded into existence". [Screencap: "There are several things we think are impor…"] [Screencap: "A gradual transition gives people, policyma…"]
Then there's a glib paragraph about how "most expert predictions have been wrong so far" ending in footnote 2: [Screencap: "2. For example, when we first started OpenA…"]
Paraphrasing: "Our experts thought we could do this as a non-profit, but then we realized we wanted MOAR MONEY. Also we thought we should just do everything open source but then we decided nah. Also, can't be bothered to even document the systems or datasets."
Hey @OpenAI, I'm speaking to you from 2018 to say: DOCUMENT YOUR DAMN DATASETS. Also, to everyone else: If you don't know what's in it, don't use it.

Source: aclanthology.org/Q18-1041.pdf [Screencap from Bender & Friedman 2018: "These two recom…"]
@OpenAI Okay, back to @sama. "As our systems get closer to AGI" -- here's a false presupposition again. Your system isn't AGI, it isn't a step towards AGI, and yet you're dropping that in as if the reader is just supposed to nod along. [Screencap: "As our systems get closer to AGI, we are be…"]
Oh, and did you all catch that shout-out to xrisk? Weirdo longtermist fantasy indeed.
As I said in my thread yesterday, I wish I could just laugh at these people, but unfortunately they are attempting (and, I think, succeeding) to shape the discussion about regulation of so-called AI systems. [Screencap: "In particular, we think it's important th…"]
What's needed is regulation about: how data can be collected and used, transparency of datasets, models and the deployment of text/image generation systems, recourse and contestability of any automated decision making, etc.
Talking about text synthesis machines as if they were "AI" muddies the waters and hampers effective discussions about data rights, transparency, protection from automated decision systems, surveillance, and all the rest of the pressing issues.
The problem isn't regulating "AI" or future "AGI". It's protecting individuals from corporate and government overreach using "AI" to cut costs and/or deflect accountability.
The contradiction in these next 2 paras is stunning: We think you should be able to do whatever you want with our systems, bc "diversity of ideas", but also we think we can align the systems with "human values". So, assholes can create fake revenge porn, but that's okay because-? [Screencap: "The "default setting" of our products w…"]
LOLOL -- calling something a "ratio" doesn't make it measurable or, ahem, real. [Screencap: "Importantly, we think we often have to make…"]
[This is exhausting, but I started it. Might as well finish.]
Wait what -- now they're talking seriously about "late-stage AGI development"? [Screencap: "In addition to these three areas, we have a…"]
Here's a bunch of promises about future oversight by unnamed independent auditors and also "major world governments" (who counts as major? who decides?). Also, how about just DOCUMENTING YOUR DAMN DATA for everyone to see? [Screencap: "We think it's important that efforts like…"]
"Continuum of intelligence" is gross, not least for the suggestions of ableism, eugenics, transhumanism etc. But also "rate of progress [of] the past decade"? Progress towards what? Ever larger carbon footprints? More plausible fake text? [Screencap: "The first AGI will be just a point along th…"]
And, more to the pt: There are harms NOW: to privacy, theft of creative output, harms to our information ecosystems, and harms from the scaled reproduction of biases. An org that cared about "benefitting humanity" wouldn't be developing/disseminating tech that does those things.
No, they don't want to address actual problems in the actual world (which would require ceding power). They want to believe themselves gods who can not only create a "superintelligence" but have the beneficence to do so in a way that is "aligned" with humanity.

/fin [Screencap: "Successfully transitioning to a world with…"]


