This is so painful to watch. @60Minutes and @sundarpichai working in concert to heap on the #AIHype. Partial transcript (that I just typed up) and reactions from me follow:
@60Minutes @sundarpichai Reporter: "Of the AI issues we talked about, the most mysterious is called 'emergent properties'. Some AI systems are teaching themselves skills that they weren't expected to have."

"Emergent properties" seems to be the respectable way of saying "AGI". It's still bullshit.

>>
As @mmitchell_ai points out (read her whole thread; it's great), if you create ignorance about the training data, of course system performance will be surprising.



>>
@mmitchell_ai Reporter: "How this happens is not well understood. For example, one Google AI program adapted on its own after it was prompted in the language of Bangladesh, which it was not trained to know."

Is there Bangla in the training data? Of course there is:
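(A minimal sketch, for illustration only, of how anyone could check a claim like this given access to the data: scan a corpus sample for characters in the Bengali Unicode block, U+0980 to U+09FF. The file path is a hypothetical placeholder, not Google's actual training data.)

import re

# Hypothetical sketch: count lines containing Bengali-script characters
# in a text sample. "corpus_sample.txt" is a placeholder, not a real corpus.
BENGALI = re.compile(r"[\u0980-\u09FF]")  # Unicode block for Bengali script

def count_bengali_lines(path: str) -> int:
    hits = 0
    with open(path, encoding="utf-8") as f:
        for line in f:
            if BENGALI.search(line):
                hits += 1
    return hits

print(count_bengali_lines("corpus_sample.txt"))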

>>
@mmitchell_ai Unidentified interviewee: "We discovered that with very few amounts of prompting in Bengali, it can now translate all of Bengali."

What does "all of Bengali" actually mean? How was this tested?

>>
Later in the clip @sundarpichai says: "There is an aspect of this which we call, all of us in the field, call it as a black box. You know, you don't fully understand, and you can't quite tell why it said this or why it got wrong. [...]"

>>
@sundarpichai Reporter: "You don't fully understand how it works, and yet you've turned it loose on society?"

Pichai: "Let me put it this way: I don't think we fully understand how a human mind works, either."

Did you catch that rhetorical sleight of hand?

>>
@sundarpichai Why would our (scientific, I assume) understanding of human psychology or neurobiology be relevant here? The reporter asked why a company would be releasing systems it doesn't understand. Are humans something that companies "turn loose on" society? (Of course not.)

>>
The rhetorical move @sundarpichai is making here invites the listener to imagine Bard as something like a person, whose behavior we have to live with or maybe patiently train to be better. IT. IS. NOT.

>>
@sundarpichai More generally, any time an AI booster makes this move ("we don't understand humans either"), they're either trying to evade accountability or trying to sell their system as some mysterious, magical, autonomous being. Reporters should recognize this and PUSH BACK.

>>
@sundarpichai Still later in the clip, regarding a short story that Bard produced and which he found moving, the reporter asks: "How did it do all of those things if it's just trying to figure out what the next word is?"
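("Figuring out what the next word is" is not a metaphor; it is literally the training objective. A minimal sketch of next-token prediction, using an openly available model (GPT-2 via Hugging Face transformers) as a stand-in; the prompt is arbitrary:)

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# A small, openly available causal LM stands in for any next-token predictor.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "Once upon a time, there was"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, seq_len, vocab_size)

# The final position's logits are the model's distribution over the next token.
probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(probs, k=5)
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(idx))!r}  p={p:.3f}")

Everything downstream, stories included, is sampled from distributions exactly like this one.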

>>
Pichai responds: "I've had those experiences talking with Bard as well. There are two views of this. You know, there are a set of people who view this as, 'Look, these are just algorithms. It's just repeating what it's seen online.'"

>>
Pichai cont: "Then there's the view where these algorithms are showing emergent properties: to be creative, to reason, to plan, and so on, right? And personally, I think we need to be, we need to approach this with humility."

>>
You know what approaching this with humility would mean, @sundarpichai? It would mean not talking about "emergent properties", which is really a dog whistle for "AGI", which in turn is code for "we created a god!" (Or maybe "We are god, creating life.")

>>
@sundarpichai Approaching this with humility would mean not putting out unscoped, untested systems (h/t @timnitGebru) and just expecting the world to deal. It would mean taking into consideration the needs and experiences of those your tech impacts.

