*sigh* once again relegated to the critics' box. The framing in this piece leans so hard into the victims (no one believed us) persevering (we showed 'em!) narrative of the deep learning folks. #AIhype ahead:
"Success draws critics", uh nope. I'm not in this conversation because of whatever success deep learning has had. I'm in it because of the unfounded #AIhype and the harms being carried out in the name of so-called "AI".
>>
"huge progress ... in some key applications like computer vision and language" --- uh "language" isn't an application, TYVM.
And I am not trying to "take away" any actual progress (e.g. improved ASR, MT). I'm only taking issue with overclaims.
>>
There are five people quoted in the article. But there are three photos: Geoffrey Hinton, Yann LeCun, and Fei-Fei Li. It's a hagiography of them. Gary Marcus and I are in there as "critics" to be "dismissed".
>>
I'm glad at least some of the points I was making about societal implications made it in (though I never said "gone too far", that suggests there's some coherent path here).
>>
But then she gives LeCun the space to do this rebuttal (though it is not at all clear that he was shown my words; these quotes could have been in response to generic questions about "AI ethicists"):
>>
This makes it sound like he thinks I'm simplifying something, if his words really are in response to mine. But even if not: scholars like Noble, Benjamin, Broussard, Raji, Gebru, Birhane, Marshall are the ones diving in and exploring the complexities!
>>
And, frankly, the implication that only the people who build these things are qualified to comment on their societal implications/#AIethics shows just how naïve and *un*qualified LeCun is in this area.
Note: I'm assuming naïveté and not ill-intent. Generously.
>>
When the leaders of the field are unable to listen to and learn from the amazing Black women scholars doing this work, is it any surprise that DEI efforts are failing?
>>
It's not enough to recruit people from marginalized & otherwise underrepresented groups into the field. Without co-ownership of the relevant spaces, it won't be feasible for them to stay.
>>
Google pushed out Dr. @timnitGebru and Dr. @mmitchell_ai rather than let them lead towards a more diverse work environment.
>>
So, lesson learned. Just because a reporter seems (with their initial query) to be interested in writing a piece that doesn't succumb to the AI hype doesn't mean they have actually extricated themselves.
@mathbabedotorg I do think there's a positive role for shame in this case --- shame here is reinforcing community values against "experimenting" with vulnerable populations without doing due diligence re research ethics.
>>
It seems that part of the #BigData #mathymath #ML paradigm is that people feel entitled to run experiments involving human subjects despite not having had relevant training in research ethics—y'know, computer scientists bumbling around thinking they have the solutions to everything. >>
There's a certain kind of techbro who thinks it's a knock-down argument to say "Well, you haven't built anything". As if the only people whose expertise counts are those close to the machine. I'm reminded (again) of @timnitGebru's wise comments on "the hierarchy of knowledge". >>
I've been pondering some recently about where that hierarchy comes from. It's surely reinforced by the way that $$ (both commercial and, sadly, federal research funds) tends to flow --- and people mistaking VCs, for example, as wise decision makers.
>>
But I also think that some of it has roots in the way different subjects are taught. Math & CS are both (frequently) taught in very gate-keepy ways (think weeder classes), and students are evaluated with very cut & dried exams.
Trying out You.com because people are excited about their chatbot. First observation: their disclaimer. Here's this thing we're putting up for everyone to use while also knowing (and saying) that it actually doesn't work.
Second observation: The footnotes, allegedly giving the source of the information provided in chatbot style, are difficult to interpret. How much of that paragraph is actually sourced from the relevant page? Where does the other "info" come from?
A few of the queries I tried returned paragraphs with no footnotes at all.
Chatbots-as-search is an idea based on optimizing for convenience. But convenience is often at odds with what we need to be doing as we access and assess information.
We're seeing multiple folks in #NLProc who *should know better* bragging about using #ChatGPT to help them write papers. So, I guess we need a thread on why this is a bad idea:
>>
1- The writing is part of the doing of science. Yes, even the related work section. I tell my students: Your job there is to show how your work builds on what has gone before. This requires understanding what has gone before and reasoning about the difference.
>>
The result is a short summary for others to read that you, the author, vouch for as accurate. In general, the practice of writing these sections in #NLProc (and I'm guessing CS generally) is pretty terrible. But off-loading this to text synthesizers only makes it worse.
@willknight The 1st is somewhat subtle. Saying this ability has been "unlocked" paints a picture where there is a pathway to some "AI" and what technologists are doing is figuring out how to follow that path (with LMs, no less!). SciFi movies are not in fact documentaries from the future. >>
@willknight Far more problematic is the closing quote, wherein Knight returns to the interviewee he opened with (CEO of a coding tools company) and platforms her opinions about "AI" therapists.