Much of the #AIhype seems to be cases where people confuse the artifact that results from human cognitive (incl creative) activity with the cognitive activity itself.

Which makes me wonder >>
Without the incentive to sell their systems (or generate clicks on articles) would these people really believe that e.g. writing a book is just choosing lots of word forms and putting them on the page?

>>
And given the incentive to sell their tech (+ all the world telling them how smart they are, I guess), do they actually believe that now?

>>
I suppose the other possibility is that they know it's more than that and they've gone all in on the magical thinking that their mathy maths have some mysterious emergent properties.

>>
It would all be just sad if these folks were just entertaining themselves in private, rather than receiving billions of dollars of VC funding, testifying before Congress, etc.

More from @emilymbender

Oct 3
It's good that @WIRED is covering this and shedding light on the unregulated mess that is the application of chatbots (and other so-called "AI") to mental health services.

However:
@WIRED Surely it must be possible to cover these stories without writing headlines that suggest the "AI" is something that can have agency. No AI is "trying" to do anything.

>> [Screencap: header of the article]
Furthermore, somehow this article fails to get into the ENORMOUS risks of the surveillance side of this. How are those companies handling user data? Are they ensuring those conversations only stay on the user's local device (hah, that would be nice)? Who are they selling it to?
Oct 3
Hey hey hey! Mystery AI Hype Theater episode 3 is now up on YouTube!



Some highlights of @alexhanna's & my discussion of (most of) the rest of "Can Machines Learn How to Behave?"

>>
@alexhanna The bit where we talk about what LaMDA actually does, vs. the hype:


>>
The bit where we talk about how he mangles the history of computing (and I had to decide how to transcribe a growl for the captions):



>>
Oct 2
No, a machine did not testify before Congress. It is irresponsible for @jackclarkSF to claim that it did and for @dannyfortson to repeat that claim, with no distance or skepticism, in the Sunday Times.

>> [Screencap: "On Thursday, a machine testified be…"]
@jackclarkSF @dannyfortson Here is what the English verb "testify" means (per @MerriamWebster). 3/4 of these are things that a language model can't do: it can't swear an oath, it can't speak from personal knowledge or bear witness, and it can't express a personal conviction.

>> [Screencap: first part of the definition of "testify" (verb)]
@jackclarkSF @dannyfortson @MerriamWebster Those are all category errors: language models are computer programs designed to model the distribution of word forms in text, and nothing more. They don't have convictions or personal knowledge and aren't the sort of entity that can be bound by an oath.
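
To make "model the distribution of word forms in text" concrete, here is a minimal toy sketch (an illustration added here, not anything from the thread or from any real system): a bigram counter over a tiny corpus that estimates which word form tends to follow which. That distributional bookkeeping is the whole job; nothing in it can hold a conviction, bear witness, or be bound by an oath.

```python
# Toy sketch of "modeling the distribution of word forms in text":
# count which word form follows which in a tiny corpus, then normalize
# the counts into probabilities. All names here are illustrative only.
from collections import Counter, defaultdict

corpus = "the model predicts the next word form given the previous word form".split()

# Bigram counts: how often each word form follows each other word form.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word_distribution(prev_word):
    """Estimate P(next word form | previous word form) from the counts."""
    counts = following[prev_word]
    total = sum(counts.values())
    return {word: count / total for word, count in counts.items()}

print(next_word_distribution("the"))
# -> {'model': 0.333..., 'next': 0.333..., 'previous': 0.333...}
# Just relative frequencies of word forms; no knowledge, conviction, or oath anywhere.
```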

>>
Sep 19
This article in the Atlantic by Stephen Marche is so full of #AIhype it almost reads like a self-parody. So, for your entertainment/education in spotting #AIhype, I present a brief annotated reading:

theatlantic.com/technology/arc…

/1
Straight out of the gate, he's not just comparing "AI" to "miracles" but flat out calling it one and quoting Google & Tesla (ex-)execs making comparisons to "God" and "demons".

/2 [Screencap from linked article: "Miracles can be perplex…"]
This is not the writing of someone who actually knows what #NLProc is. If you use grammar checkers, autocorrect, online translation services, web search, autocaptions, a voice assistant, etc., you use NLP technology in everyday life. But guess what? NLP isn't a subfield of "AI".
/3 [Screencap, same article: "Early artificial intelligence…"]
Aug 25
This piece is stunning: stunningly beautifully written, stunningly painful, and stunningly damning of family policing, of the lack of protections against data collection in our country, & of the mindset of tech solutionism that attempts to remove "fallible" human decision makers.
.@UpFromTheCracks's essay is both a powerful call for the immediate end of family policing and an extremely pointed case study in so many aspects of what gets called #AIethics:

1. What are the potentials for harm from algorithmic decision making?

>>
2. The absolutely essential contributions of lived experience and positionality to understanding those harms.
3. The ways in which data collection sets up future harms.

>>
Aug 24
Just to recap the morning so far (admittedly, some of these news stories are from a couple of days ago):