I find this reporting infuriating, so I'm going to use it to create a mini-lesson in detecting #AIhype.

If you're interested in following this lesson, please read the article, making note of what you think sounds exciting and what makes you skeptical.

nytimes.com/2022/04/05/tec…
You read it and/or hit a paywall and still want my analysis? Okay, here we go:
First, let's note the good intentions. The journalist reports that mental health services are hard to access (because supply is insufficient, though maybe not only because of that), and that it would be good to have automated systems that help out.
Also, the reporter notes that it would be diagnostically helpful to have externally observable ("objective") indicators of mental health. Not my field, but this is believable to me.
And now we are squarely in the ML techno-solutionism danger zone: It's established that it would be beneficial to have something that can do X with only Y input, but not that it's actually possible to do X with only Y input.
On the other hand, you can always train an ML system that takes ys (elements of Y) as input and gives xs (elements of X) as output and thus LOOKS LIKE it's doing X with only Y input.
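To make that concrete, here is a minimal sketch (my own illustration, nothing from the article or the thread): train a classifier on pure-noise "voice features" paired with invented "diagnosis" labels, and it will still happily emit a diagnosis for every recording you hand it, even though there was never any signal to learn. Everything below, the scikit-learn setup and all the data, is made up for illustration.

```python
# Minimal sketch: an ML system that LOOKS LIKE it does X (diagnosis)
# with only Y (voice recordings) as input, trained on data with no signal at all.
# Assumes numpy and scikit-learn are installed; all data here is invented noise.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
ys = rng.normal(size=(1000, 20))    # elements of Y: "voice features" (pure noise)
xs = rng.integers(0, 2, size=1000)  # elements of X: "diagnoses" (unrelated to the ys)

ys_train, ys_test, xs_train, xs_test = train_test_split(ys, xs, random_state=0)

model = LogisticRegression().fit(ys_train, xs_train)

# The model emits a confident-looking "diagnosis" for any input...
print(model.predict(ys_test[:5]))   # five "diagnoses", conjured from noise
# ...but held-out accuracy is at chance, because there was never any signal.
print(model.score(ys_test, xs_test))  # ~0.5
```

The point: producing xs from ys is always cheap; demonstrating that the ys actually determine the xs is the hard part.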
So what is the evidence that anything could do X (provide mental health diagnoses) with only Y (voice recordings) input? Our emotional state (depression, anxiety) can affect our speech: [Screencap from linked article]
So, there might be some signal there. But is it enough to do anything reliable? Under what conditions? (Compare e.g. what needs to be true for accurate blood pressure readings, and the fact that even physiological medical tech is insufficiently tested on non-white, non-men.)
But never fear! The AI can pick up all the details! 🙃🙃🙃 [Screencap from linked article]
So we're being asked to believe here that not only is it possible to do the thing we wish could be done, but that it's possible because "AI" is supposedly better than humans at doing this thing (modeling human emotional states).
Note also the jump from maybe-there's-evidence for anxiety and depression being observable via voice to also diagnosing schizophrenia and PTSD.
It makes me sad to see domain experts being drawn in in this way. I can't tell if Dr. Bentley is being quoted out of context, or if she actually believes the hype. [Screencap from linked article]
Getting down to the level of individual sentences in this article for a bit, note the work that "perfectly" is doing here. This makes it sound like "AI" is some pre-existing natural phenomenon which just happens to be a good match for this problem. [Screencap from linked article]
Also, that em-dash makes it hard to tell whether this is a bald assertion, part of what some AI researchers believe, or part of what they believe might be the case. I'm guessing the average reader will miss that nuance and read it as a bald assertion. [Screencap from linked article]
Another type of #AIhype shows up subtly: the framing "little human oversight," which suggests autonomy on the part of the system. So-called "AI" systems are only artifacts, but the more they are hyped as autonomous agents, the easier it is to believe that they can do magic. [Screencap from linked article]
The article does point out that for "mainstream" use, the technology would have to be tested to medical standards. Quoting Dr. Bentley again: [Screencap from linked article]
But at the same time, the article is referring to apps that are already commercially available---the journalist tested two of them on herself. So I guess "mainstream" really means only "within the medical establishment" here?
And this brings me to my final point: dual use. If these systems are out there, purporting to measure aspects of mental health (one is called Mental Fitness, ffs) on the basis of short recordings, who else is going to use them, on whom, and to what ends?
I want to see all reporting on applications of so-called "AI" asking these questions. Can it be used for surveillance? Can it be used for stalking? How might the claims being made by the developers shape the way it could be used?

/fin
