Discover and read the best of Twitter Threads about #AIhype

Most recent (17)

Today's #AIhype take-down + analysis (first crossposted to both Twitter & Mastodon): an "AI politician".
vice.com/en/article/jgp…

/1
Working from the reporting by @chloexiang at @motherboard, it appears that this is some sort of performance art, except that the project is (purports to be?) interacting with the actual Danish political system.

/2
I have no objections to performance art in general, and something that helps the general public grasp the absurdity of claims of "AI" and reframe what these systems should be used for seems valuable.

/3
Read 14 tweets
I guess it's a milestone for "AI" startups when they get their puff-pieces in the media. I want to highlight some obnoxious things about this one, on Cohere. #AIhype ahead...

…mail-com.offcampus.lib.washington.edu/business/rob-m…

>>
First off, it's boring. I wouldn't have made it past the first couple of paragraphs, except the reporter had talked to me so I was (increasingly morbidly) curious how my words were being used.

>>
The second paragraph (and several others) is actually the output of their LLM. This is flagged in the subhead and in the third paragraph. I still think it's terrible journalistic practice.

>> [Screencap: "Before Aid..."] [Screencap: "could becom..."]
Read 17 tweets
In an effort to curb misunderstanding and #AIHype on the topic of language models (LMs), we're circulating a tweet thread to offer a baseline understanding of how systems such as OpenAI's GPT-3 work to deliver sequences of human-like text in response to prompts. /1
We're directing this thread especially to our humanist readers who may be only peripherally aware of the increased commercialization of (and hype about) "artificial intelligence" (AI) text generators.
NB: AI is itself a slippery term: we use it w/ caution.
/2
The best known of these models is OpenAI’s GPT-3, which is licensed by Microsoft. Students can use these models to generate human-like text by paying OpenAI directly or subscribing to subsidiary “apps.” They may also access less powerful but free models for text generation. /3
Read 37 tweets
Hi folks -- time for another #AIhype take-down + analysis of how the journalistic coverage relates to the underlying paper. The headline for today's lesson:

fiercebiotech.com/medtech/ai-spo…

/1 [Screencap of headline of th...]
At first glance, this headline seems to be claiming that from text messages (whose? accessed how?) an "AI" can detect mental health issues as well as human psychiatrists do (how? based on what data?).

/2
Let's pause to once again note that the use of "AI" in this way suggests that "artificial intelligence" is a thing that exists. Always useful to replace that term with "mathy math" or SALAMI for a reality check.

/3
Read 26 tweets
Much of the #AIhype seems to be cases where people confuse the artifact that results from human cognitive (incl creative) activity with the cognitive activity itself.

Which makes me wonder >>
Without the incentive to sell their systems (or generate clicks on articles) would these people really believe that e.g. writing a book is just choosing lots of word forms and putting them on the page?

>>
And given the incentive to sell their tech (+ all the world telling them how smart they are, I guess), do they actually believe that now?

>>
Read 5 tweets
Let's do a little #AIhype analysis, shall we? ShotSpotter claims to be able to detect gunshots from audio, and its use case is to alert the cops so they can respond.

>>
Q1: Is it plausible that a system could give the purported output (time & location of gunshot) given the inputs (audio recordings from surveillance microphones deployed in a neighborhood)?

>>
A1: At a guess, such a system could detect loud noises that include gunshots (but lots of other things) and might be able to provide some location information (which mics picked it up?) but keep in mind that cityscapes provide lots of opportunities for echoes...

>>
Read 12 tweets
This article in the Atlantic by Stephen Marche is so full of #AIhype it almost reads like a self-parody. So, for your entertainment/education in spotting #AIhype, I present a brief annotated reading:

theatlantic.com/technology/arc…

/1
Straight out of the gate, he's not just comparing "AI" to "miracles" but flat out calling it one and quoting Google & Tesla (ex-)execs making comparisons to "God" and "demons".

/2 [Screencap from linked article: "Miracles can be perplex..."]
This is not the writing of someone who actually knows what #NLProc is. If you use grammar checkers, autocorrect, online translation services, web search, autocaptions, a voice assistant, etc., you use NLP technology in everyday life. But guess what? NLP isn't a subfield of "AI".
/3 [Screencap, same article: "Early artificial intelligence..."]
Read 25 tweets
*sigh* once again relegated to the critics' box. The framing in this piece leans so hard into the victims (no one believed us) persevering (we showed 'em!) narrative of the deep learning folks. #AIhype ahead:

venturebeat.com/ai/10-years-on…
"Success draws critics", uh nope. I'm not in this conversation because of whatever success deep learning has had. I'm in it because of the unfounded #AIhype and the harms being carried out in the name of so-called "AI".

>> [Screencap from article link...]
"huge progress ... in some key applications like computer vision and language" --- uh "language" isn't an application, TYVM.

And I am not trying to "take away" any actual progress (e.g. improved ASR, MT). I'm only taking issue with overclaims.

>> [Screencap, same article]
Read 12 tweets
Read the recent Vox article about effective altruism ("EA") and longtermism and I'm once again struck by how *obvious* it is that these folks are utterly failing at ceding any power & how completely mismatched "optimization" is with the goals of doing actual good in the world.
>>
Just a few random excerpts, because it was so painful to read...

>>
"Oh noes! We have too much money, and not enough actual need in today's world."

First: This is such an obvious way in which insisting on only funding the MOST effective things is going to fail. (Assuming that is even knowable.)

>> [Screencap reading: "EA..."]
Read 19 tweets
Some reflections on media coverage of tech/science/research. It seems to me that there are broadly speaking two separate paths of origin for these stories: In one, the journalist sees something that they think the public should be informed of, and digs into the scholarship.

>>
In the other, the researchers have something they want to draw the world's attention to. But there are two subcases here:

Subcase 2a: Researchers (usually in academia) who see a need for the public to be informed, either acutely (ppl need this info NOW) or long-term (science literacy).
>>
Subcase 2b: PR orgs (usually in industry) want the public to know about their cool research, because it serves as positive marketing for a company, halo effect of competence, etc etc.

>>
Read 13 tweets
This story (by @nitashatiku) is really sad, and I think an important window into the risks of designing systems to seem like humans, which are exacerbated by #AIhype:

washingtonpost.com/technology/202…
@nitashatiku As I am quoted in the piece: “We now have machines that can mindlessly generate words, but we haven’t learned how to stop imagining a mind behind them”

>>
@nitashatiku But it isn't only or even primarily about individual humans learning how to conceptualize what these systems are doing---we also need both regulation and design practices around transparency.
Read 3 tweets
More #AIhype #PSEUDOSCI #SnakeOil

We can detect X from Y! In this case, we're to believe X is depression (and Y is the public Twitter data), but actually >>
It hasn't been tested against actual mental health data, but rather a scraped dataset with labels inferred based on what's in (or not in) the tweets.

>>
I'm beginning to wonder what kind of training journalists receive regarding covering technology. Surely there are some best practices being taught in journalism programs to avoid falling for this bs?
Read 3 tweets
I find this reporting infuriating, so I'm going to use it to create a mini-lesson in detecting #AIhype.

If you're interested in following this lesson, please read the article, making note of what you think sounds exciting and what makes you skeptical.

nytimes.com/2022/04/05/tec…
You read it and/or hit a paywall and still want my analysis? Okay, here we go:
First, let's note the good intentions. The journalist reports that mental health services are hard to access (because they're insufficient, but maybe not only that), and it would be good to have automated systems that help out.
Read 19 tweets
Yes, this is great! I'm sorry we didn't find your paper while writing ours. (cc @chirag_shah)

A few favorite quotes & one quibble:
@chirag_shah Potthast et al (from @webis_de) suggest a standard disclaimer on direct answer responses, which is very well put:

“This answer is not necessarily true. It just fits well to your question.”

dl.acm.org/doi/abs/10.114…

>>
@chirag_shah @webis_de Also Potthast et al: "As no actual conversations are currently supported by conversational search agents, every query is an ad hoc query that is met with one single answer."

No. Actual. Conversations.

There's a whole study to be done on the perils of aspirational tech names.

>>
Read 6 tweets
💯 this! Overfunding is bad for the overfunded fields, bad for researchers in those fields, bad for the fields left to starve, and bad for society as a result of all of the above.

>>
Re bad for the field, see @histoftech's tweet and the tweet by @ChristophMolnar that they are QT-ing.

>>
Re bad for researchers in the overfunded fields, see all the discourse around how do we keep up with arXiv??

>>
Read 11 tweets
Thanks for the ping, @michaelbrundage

I don't think there's anything specific to LLMs here. Rather, this is endemic to the way ML is applied these days:
1. Someone creates a dataset & describes it as a benchmark for some skill. Sometimes the skill is well-scoped & the benchmark represents a portion of it reasonably (eg ASR for news in X language). In others, the skill is bogus (all the physiognomy tasks; IQ from text samples; &c).
2. ML researchers use the benchmarks to test different approaches to learning. This can be done well (for well-scoped tasks): which algorithms are suited to which tasks and why? (Requires error analysis, and understanding the task as well as the algorithm.)
Read 21 tweets
Wow this article covers a lot of ground! Seems like a good way for folks interested in "AI ethics" and what that means currently to get a quick overview.

Draws on work by @mmitchell_ai @timnitGebru @rajiinio @jovialjoy @mathbabedotorg and many others.
>>
zdnet.com/article/ethics…
A few pull quotes & comments:
"Ethics in AI is essentially questioning, constantly investigating, and never taking for granted the technologies that are being rapidly imposed upon human life.

That questioning is made all the more urgent because of scale."
Read 20 tweets
