Much of the #AIhype seems to be cases where people confuse the artifact that results from human cognitive (incl creative) activity with the cognitive activity itself.
Without the incentive to sell their systems (or generate clicks on articles) would these people really believe that e.g. writing a book is just choosing lots of word forms and putting them on the page?
>>
And given the incentive to sell their tech (+ all the world telling them how smart they are, I guess), do they actually believe that now?
>>
I suppose the other possibility is that they know it's more than that and they've gone all in on the magical thinking that their mathy maths have some mysterious emergent properties.
>>
It would all be just sad if these folks were just entertaining themselves in private, rather than receiving billions of dollars of VC funding, testifying before Congress, etc.
It's good that @WIRED is covering this and shedding light on the unregulated mess that is the application of chatbots (and other so-called "AI") to mental health services.
@WIRED Surely it must be possible to cover these stories without writing headlines that suggest the "AI" is something that can have agency. No AI is "trying" to do anything.
>>
Furthermore, somehow this article fails to get into the ENORMOUS risks of the surveillance side of this. How are those companies handling user data? Are they ensuring those conversations only stay on the user's local device (hah, that would be nice)? Who are they selling it to?
No, a machine did not testify before Congress. It is irresponsible for @jackclarkSF to claim that it did and for @dannyfortson to repeat that claim, with no distance or skepticism, in the Sunday Times.
>>
@jackclarkSF @dannyfortson Here is what the English verb "testify" means (per @MerriamWebster). 3/4 of these are things that a language model can't do: it can't swear an oath, it can't speak from personal knowledge or bear witness, and it can't express a personal conviction.
>>
@jackclarkSF @dannyfortson @MerriamWebster Those are all category errors: language models are computer programs designed to model the distribution of word forms in text, and nothing more. They don't have convictions or personal knowledge and aren't the sort of entity that can be bound by an oath.
This article in the Atlantic by Stephen Marche is so full of #AIhype it almost reads like a self-parody. So, for your entertainment/education in spotting #AIhype, I present a brief annotated reading:
Straight out of the gate, he's not just comparing "AI" to "miracles" but flat-out calling it one, and quoting Google & Tesla (ex-)execs making comparisons to "God" and "demons".
/2
This is not the writing of someone who actually knows what #NLProc is. If you use grammar checkers, autocorrect, online translation services, web search, autocaptions, a voice assistant, etc you use NLP technology in everyday life. But guess what? NLP isn't a subfield of "AI".
/3
This piece is stunning: stunningly beautifully written, stunningly painful, and stunningly damning of family policing, of the lack of protections against data collection in our country, & of the mindset of tech solutionism that attempts to remove "fallible" human decision makers.
.@UpFromTheCracks's essay is both a powerful call for the immediate end of family policing and an extremely pointed case study in so many aspects of what gets called #AIethics:
1. What are the potential harms from algorithmic decision making?
>>
2. The absolutely essential role of lived experience and positionality in understanding those harms.
3. The ways in which data collection sets up future harms.