You read it and/or hit a paywall and still want my analysis? Okay, here we go:
First, let's note the good intentions. The journalist reports that mental health services are hard to access (because supply is insufficient, though maybe not only that), and that it would be good to have automated systems that help out.
Also, the reporter notes that it would be diagnostically helpful to have externally observable ("objective") indicators of mental health. Not my field, but this is believable to me.
And now we are squarely in the ML techno-solutionism danger zone: It's established that it would be beneficial to have something that can do X with only Y input, but not that it's actually possible to do X with only Y input.
On the other hand, you can always train an ML system that takes ys (elements of Y) as input and gives xs (elements of X) as output and thus LOOKS LIKE it's doing X with only Y input.
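To make that concrete, here is a minimal sketch (mine, not from the article; the feature dimensions, labels, and model choice are all invented for illustration): a standard classifier trained on stand-in "voice features" and made-up "diagnosis" labels will still confidently emit predictions for new inputs, whether or not the features carry any real signal about the labels.

```python
# Hypothetical illustration: a classifier fitted on arbitrary inputs (Y)
# and arbitrary labels (X) will emit X-shaped outputs for any new Y,
# regardless of whether Y actually contains information about X.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
y_features = rng.normal(size=(200, 16))      # stand-in "voice recording features"
x_labels = rng.integers(0, 2, size=200)      # stand-in "diagnoses": pure noise

model = LogisticRegression().fit(y_features, x_labels)

new_recording = rng.normal(size=(1, 16))     # a fresh, equally meaningless input
print(model.predict(new_recording))          # still prints a "diagnosis"
```

The point of the sketch: the system LOOKS LIKE it is doing X with only Y input, because nothing in the training or prediction machinery requires that the mapping be real.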
So what is the evidence that anything could do X (provide mental health diagnoses) with only Y (voice recordings) input? Our emotional state (depression, anxiety) can affect our speech:
So, there might be some signal there. But is it enough to do anything reliable? Under what conditions? (Compare e.g. what needs to be true for accurate blood pressure readings, and the fact that even physiological medical tech is insufficiently tested on non-white, non-men.)
But never fear! The AI can pick up all the details! 🙃🙃🙃
So, we're being asked to believe, here, that not only is it possible to do the thing that we wish could be done, it's possible because "AI" is supposedly better than humans at doing this thing (modeling human emotional states).
Note also the jump from maybe-there's-evidence for anxiety and depression being observable via voice to also diagnosing schizophrenia and PTSD.
It makes me sad to see domain experts being drawn in like this. I can't tell if Dr. Bentley is being quoted out of context, or if she actually believes the hype.
Getting down to the level of individual sentences in this article for a bit, note the work that "perfectly" is doing here. This makes it sound like "AI" is some pre-existing natural phenomenon which just happens to be a good match for this problem.
Also, that em-dash makes it hard to tell whether this is a bald assertion, part of what some AI researchers believe, or part of what they believe might be the case. I'm guessing the average reader will miss that nuance and read it as a bald assertion.
Another type of #AIhype shows up subtly: the framing "little human oversight" which suggests autonomy on the part of the system. So-called "AI" systems are only artifacts, but the more they are hyped as autonomous agents, the easier it is to believe that they can do magic.
The article does point out that for "mainstream" use, the technology would have to be tested to medical standards. Quoting Dr. Bentley again:
But at the same time, the article is referring to apps that are already commercially available---the journalist tested two of them on herself. So I guess "mainstream" really means only "within the medical establishment" here?
And this brings me to my final point: dual use. If these systems are out there, purporting to measure aspects of mental health (one is called Mental Fitness, ffs) on the basis of short recordings, who else is going to use them, on whom, and to what ends?
I want to see all reporting on applications of so-called "AI" asking these questions. Can it be used for surveillance? Can it be used for stalking? How might the claims being made by the developers shape the way it could be used?
/fin
As OpenAI and Meta introduce LLM-driven searchbots, I'd like to once again remind people that neither LLMs nor chatbots are good technology for information access.
Why are LLMs bad for search? Because LLMs are nothing more than statistical models of the distribution of word forms in text, set up to output plausible-sounding sequences of words.
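For intuition, here's a toy sketch (my illustration, assuming nothing beyond a tiny made-up corpus): estimate next-word probabilities from text and sample from them. The output looks fluent because it follows the distribution of word forms, but nothing in the procedure consults sources or checks facts.

```python
# Toy "language model": a table of next-word probabilities estimated from
# text. Sampling from it yields plausible-sounding word sequences; no step
# in the procedure verifies whether the resulting string is true.
import random
from collections import defaultdict, Counter

corpus = "the app can detect depression . the app can detect anxiety .".split()

# Estimate a bigram distribution: P(next word | current word).
counts = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    counts[word][nxt] += 1

def sample_next(word):
    options = counts[word]
    words, weights = zip(*options.items())
    return random.choices(words, weights=weights)[0]

out = ["the"]
for _ in range(8):
    out.append(sample_next(out[-1]))
print(" ".join(out))  # fluent-looking output, not a source of information
```

Real LLMs are vastly bigger and condition on much longer contexts, but the basic move is the same: output what is statistically plausible, not what is sourced or true.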
Either it's a version of ChatGPT OR it's a search system where people can find the actual sources of the information. Those two things can't both be true at the same time. /2
Also: the output of "generative AI", synthetic text, is NOT information. So, UK friends, if your government is actually using it to respond to freedom of information requests, they are presumably violating their own laws about freedom of information requests. /3
It is depressing how often Bender & Koller 2020 is cited incorrectly. My best guess is that people writing about whether or not LLMs 'understand' or 'are agents' have such strongly held beliefs about what they want to be true that this impedes their ability to understand what we wrote.
Or maybe they aren't actually reading the paper --- just summarizing based on what other people (with similar beliefs) have mistakenly said about the paper.
>>
Today's case in point is a new arXiv posting, "Are Language Models More Like Libraries or Like Librarians? Bibliotechnism, the Novel Reference Problem, and the Attitudes of LLMs" by Lederman & Mahowald, posted Jan 10, 2024.
A quick thread on #AIhype and other issues in yesterday's Gemini release: 1/
#1 -- What an utter lack of transparency. Researchers from multiple groups, including @mmitchell_ai and @timnitgebru when they were at Google, have been calling for clear and thorough documentation of training data & trained models since 2017. 2/
In Bender & Friedman 2018, we put it like this: 3/
With the OpenAI clownshow, there's been renewed media attention on the xrisk/"AI safety" nonsense. Personally, I've had a fresh wave of reporters asking me naive questions (+ some contacts from old hands who know how to handle ultra-rich man-children with god complexes). 🧵1/
As a quick reminder: AI doomerism is also #AIhype. The idea that synthetic text extruding machines are harbingers of AGI that is on the verge of combusting into consciousness and then turning on humanity is unscientific nonsense. 2/
At the same time, it serves to suggest that the software is powerful, even magically so: if the "AI" could take over the world, it must be something amazing. 3/
"[False arrests w/face rec tech] should be at the heart of one of the most urgent contemporary debates: that of artificial intelligence and the dangers it poses. That it is not, and that so few recognise it as significant, shows how warped has become the discussion of AI,"
>>
"We have stumbled into a digital panopticon almost without realising it. Yet to suggest we live in a world shaped by AI is to misplace the problem. There is no machine without a human, and nor is there likely to be."