People often ask me if I think computers could ever understand language. You might be surprised to hear that my answer is yes! My quibble isn't with "understand", it's with "human level" and "general".
To answer that question, of course, we need a definition of understanding. I like the one from Bender & @alkoller 2020: meaning is the relationship between linguistic form and something external to language (communicative intent), and understanding is retrieving that intent from form.
>>
So when I ask a digital voice assistant to set a timer for a specific time, or to retrieve information about the current temperature outside, or to play the radio on a particular station, or to dial a certain contact's phone number, and it does the thing: it has understood.
>>
Has it understood the same way or as well as a human would? No. It doesn't make inferences about what the timer is for based on shared context with me or wonder what I plan to do outdoors.
>>
But that's okay, because it's a tool with limited language understanding, and it has served its purpose. And it's a very impressive and interesting tool! Language is cool and building computer systems that can usefully process language is exciting!
>>
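A toy sketch of what that kind of limited "understanding" amounts to (illustrative only, not from the thread; the patterns and names are made up): the tool maps surface form onto a small fixed inventory of intents and slots, with no shared context and no inference about why I asked.

```python
import re

# Hypothetical intent matcher: surface form -> a fixed inventory of intents.
# There is no shared context and no reasoning about *why* the user asked;
# matching the form and filling the slots is the whole game.
INTENT_PATTERNS = {
    "set_timer":   re.compile(r"set a timer for (?P<duration>.+)"),
    "get_weather": re.compile(r"(?:what'?s|what is) the temperature(?: outside)?"),
    "call":        re.compile(r"call (?P<contact>.+)"),
}

def parse(utterance: str):
    """Return (intent, slots) if the utterance's form matches a known pattern."""
    text = utterance.lower().strip()
    for intent, pattern in INTENT_PATTERNS.items():
        match = pattern.search(text)
        if match:
            return intent, {k: v for k, v in match.groupdict().items() if v}
    return None, {}

print(parse("Set a timer for 10 minutes"))       # ('set_timer', {'duration': '10 minutes'})
print(parse("What's the temperature outside?"))  # ('get_weather', {})
```

If the form maps to the right action, the tool has "understood" in the limited sense above; nothing human-level or general is required.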
In other words: linguistics, computational linguistics, and #NLProc all collectively and separately have value completely unrelated to the project of "AI".
>>
But the #AIhype is making it harder to do that work. When AI bros say their mathy maths are completely general solutions to everything language-related & people believe them, folks working on the actual details of language, language use, and functioning language tech have to push through all of that first.
I'm unmoved when people talk about one danger of #AIhype being the prospect of it bringing on another AI winter. But I do care that #AIhype is making it harder (in this and many other ways) for researchers grounded in the details of our research areas to do our work.
• • •
Either it's a version of ChatGPT OR it's a search system where people can find the actual sources of the information. Those two things can't both be true at the same time. /2
Also: the output of "generative AI", synthetic text, is NOT information. So, UK friends, if your government is actually using it to respond to freedom of information requests, they are presumably violating their own freedom of information laws. /3
It is depressing how often Bender & Koller 2020 is cited incorrectly. My best guess is that people writing about whether or not LLMs "understand" or "are agents" have such strongly held beliefs about what they want to be true that this impedes their ability to understand what we wrote.
Or maybe they aren't actually reading the paper, just summarizing based on what other people (with similar beliefs) have mistakenly said about it.
>>
Today's case in point is a new arXiv posting, "Are Language Models More Like Libraries or Like Librarians? Bibliotechnism, the Novel Reference Problem, and the Attitudes of LLMs" by Lederman & Mahowald, posted Jan 10, 2024.
A quick thread on #AIhype and other issues in yesterday's Gemini release: 1/
#1 -- What an utter lack of transparency. Researchers from multiple groups, including @mmitchell_ai and @timnitgebru when they were at Google, have been calling for clear and thorough documentation of training data & trained models since 2017. 2/
In Bender & Friedman 2018, we put it like this: /3
With the OpenAI clownshow, there's been renewed media attention on the xrisk/"AI safety" nonsense. Personally, I've had a fresh wave of reporters asking me naive questions (+ some contacts from old hands who know how to handle ultra-rich man-children with god complexes). 🧵1/
As a quick reminder: AI doomerism is also #AIhype. The idea that synthetic text extruding machines are harbingers of AGI that is on the verge of combusting into consciousness and then turning on humanity is unscientific nonsense. 2/
At the same time, it serves to suggest that the software is powerful, even magically so: if the "AI" could take over the world, it must be something amazing. 3/
"[False arrests w/face rec tech] should be at the heart of one of the most urgent contemporary debates: that of artificial intelligence and the dangers it poses. That it is not, and that so few recognise it as significant, shows how warped has become the discussion of AI,"
>>
"We have stumbled into a digital panopticon almost without realising it. Yet to suggest we live in a world shaped by AI is to misplace the problem. There is no machine without a human, and nor is there likely to be."
I'm so tired of this argument. The "AI doomers" are not natural allies of the folks who have been documenting the real-world harms of so-called AI systems: discrimination, surveillance, pollution of the information ecosystem, data theft, labor exploitation.
Those harms are real, and they're being done by people to people, using technology.
>>
When we push back against the ridiculous distraction tactics of the AI doomers on their media tour and then get told to "be nice", it's like telling folks working on addressing climate change to allocate time & resources to oil companies raising concerns about contrails.