*sigh* once again relegated to the critics' box. The framing in this piece leans so hard into the victims (no one believed us) persevering (we showed 'em!) narrative of the deep learning folks. #AIhype ahead:
"Success draws critics", uh nope. I'm not in this conversation because of whatever success deep learning has had. I'm in it because of the unfounded #AIhype and the harms being carried out in the name of so-called "AI".
>>
"huge progress ... in some key applications like computer vision and language" --- uh "language" isn't an application, TYVM.
And I am not trying to "take away" any actual progress (e.g. improved ASR, MT). I'm only taking issue with overclaims.
>>
There are five people quoted in the article. But there are three photos: Geoffrey Hinton, Yann LeCun, and Fei-Fei Li. It's a hagiography of them. Gary Marcus and I are in there as "critics" to be "dismissed".
>>
I'm glad at least some of the points I was making about societal implications made it in (though I never said "gone too far"; that suggests there's some coherent path here).
>>
But then she gives LeCun the space to do this rebuttal (though it is not at all clear that he was shown my words; these quotes could have been in response to generic questions about "AI ethicists"):
>>
This makes it sound like he thinks I'm simplifying something, if his words really are in response to mine. But even if not: scholars like Noble, Benjamin, Broussard, Raji, Gebru, Birhane, Marshall are the ones diving in and exploring the complexities!
>>
And, frankly, the implication that only the people who build these things are qualified to comment on their societal implications/#AIethics shows just how naïve and *un*qualified LeCun is in this area.
Note: I'm assuming naïveté and not ill-intent. Generously.
>>
When the leaders of the field are unable to listen to and learn from the amazing Black women scholars doing this work, is it any surprise that DEI efforts are failing?
>>
It's not enough to recruit people from marginalized & otherwise underrepresented groups into the field. Without co-ownership of the relevant spaces, it won't be feasible for them to stay.
>>
Google pushed out Dr. @timnitGebru and Dr. @mmitchell_ai rather than let them lead towards a more diverse work environment.
>>
So, lesson learned. Just because a reporter seems (with their initial query) to be interested in writing a piece that doesn't succumb to the AI hype doesn't mean they have actually extricated themselves.
As OpenAI and Meta introduce LLM-driven searchbots, I'd like to once again remind people that neither LLMs nor chatbots are good technology for information access.
Why are LLMs bad for search? Because LLMs are nothing more than statistical models of the distribution of word forms in text, set up to output plausible-sounding sequences of words.
Either it's a version of ChatGPT OR it's a search system where people can find the actual sources of the information. Those two things can't both be true at the same time. /2
Also: the output of "generative AI", synthetic text, is NOT information. So, UK friends, if your government is actually using it to respond to freedom of information requests, they are presumably violating their own laws about freedom of information requests. /3
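To make the "statistical models of the distribution of word forms" point above concrete, here is a deliberately tiny sketch of my own (not any particular system's implementation, and with an invented toy corpus): a bigram "language model" that only learns which word forms tend to follow which, and then samples plausible continuations. Real LLMs are vastly larger neural models, but the relevant point is the same: generation is driven by distributional statistics, not by retrieving or citing sources.

```python
# Illustrative sketch only: a toy bigram "language model".
# Like any LM, it samples word forms from a learned distribution
# over what tends to follow what; nothing here consults a source,
# checks a fact, or retrieves a document.
import random
from collections import defaultdict, Counter

def train_bigram_model(corpus):
    """Count which word follows which in the training text."""
    counts = defaultdict(Counter)
    for sentence in corpus:
        tokens = sentence.split()
        for prev, nxt in zip(tokens, tokens[1:]):
            counts[prev][nxt] += 1
    return counts

def generate(counts, start, max_len=10):
    """Emit a plausible-sounding continuation by sampling from the counts."""
    word, out = start, [start]
    for _ in range(max_len):
        followers = counts.get(word)
        if not followers:
            break
        words, weights = zip(*followers.items())
        word = random.choices(words, weights=weights)[0]
        out.append(word)
    return " ".join(out)

# Invented toy corpus, purely for illustration.
corpus = [
    "the model outputs plausible sequences of words",
    "the model has no notion of sources or truth",
]
model = train_bigram_model(corpus)
print(generate(model, "the"))
```

The output will read as fluent-ish word sequences, which is exactly what such a model is built to produce; whether any of it happens to be true, or traceable to a source, is outside the model's design.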
It is depressing how often Bender & Koller 2020 is cited incorrectly. My best guess is that ppl writing abt whether or not LLMs 'understand' or 'are agents' have such strongly held beliefs abt what they want to be true that this impedes their ability to understand what we wrote.
Or maybe they aren't actually reading the paper --- just summarizing based on what other people (with similar beliefs) have mistakenly said about the paper.
>>
Today's case in point is a new arXiv posting, "Are Language Models More Like Libraries or Like Librarians? Bibliotechnism, the Novel Reference Problem, and the Attitudes of LLMs" by Lederman & Mahowald, posted Jan 10, 2024.
A quick thread on #AIhype and other issues in yesterday's Gemini release: 1/
#1 -- What an utter lack of transparency. Researchers from multiple groups, including @mmitchell_ai and @timnitgebru when they were at Google, have been calling for clear and thorough documentation of training data & trained models since 2017. 2/
In Bender & Friedman 2018, we put it like this: 3/
With the OpenAI clownshow, there's been renewed media attention on the xrisk/"AI safety" nonsense. Personally, I've had a fresh wave of reporters asking me naive questions (+ some contacts from old hands who know how to handle ultra-rich man-children with god complexes). 🧵1/
As a quick reminder: AI doomerism is also #AIhype. The idea that synthetic text extruding machines are harbingers of AGI that is on the verge of combusting into consciousness and then turning on humanity is unscientific nonsense. 2/
At the same time, it serves to suggest that the software is powerful, even magically so: if the "AI" could take over the world, it must be something amazing. 3/
"[False arrests w/face rec tech] should be at the heart of one of the most urgent contemporary debates: that of artificial intelligence and the dangers it poses. That it is not, and that so few recognise it as significant, shows how warped has become the discussion of AI,"
>>
"We have stumbled into a digital panopticon almost without realising it. Yet to suggest we live in a world shaped by AI is to misplace the problem. There is no machine without a human, and nor is there likely to be."