This story (by @nitashatiku) is really sad, and I think an important window into the risks of designing systems to seem like humans, which are exacerbated by #AIhype:
@nitashatiku As I am quoted in the piece: “We now have machines that can mindlessly generate words, but we haven’t learned how to stop imagining a mind behind them”
>>
@nitashatiku But it isn't only or even primarily about individual humans learning how to conceptualize what these systems are doing---we also need both regulation and design practices around transparency.
I see lots and lots of that distraction. Every time one of you talks about LLMs, DALL-E, etc. as "a step towards AGI" or "reasoning" or "maybe slightly conscious" you are setting up a context in which people are led to believe that "AIs" are here that can "make decisions".
>>
And then meanwhile OpenAI/Cohere/AI2 put out a weak-sauce "best practices" document which proclaims "represent diverse voices" as a key principle ... without any evidence of engaging with the work of the Black women scholars leading this field.
Without actually being in conversation (or better, if you could build those connections, in community) with the voices you said "we should represent" but then ignore/erase/avoid, you can't possibly see the things that the "gee-whiz look AGI!" discourse is distracting from.
This latest example comes from The Economist. It is a natural human reaction to *make sense of* what we see, but the thing is we have to keep in mind that all of that meaning making is on our side, not the machines'.
And can I just add that the tendency of journalists who write like this to center their own experience of awe---instead of actually informing the public---strikes me as quite self-absorbed.
I not infrequently see an argument that goes: "Making ethical NLP (or "AI") systems is too hard because humans haven't agreed on what is ethical/moral/right"
This always feels like a cop-out to me, and I think I've put my finger on why:
>>
That argument presupposes that the goal is to create autonomous systems that will "know" how to behave "ethically".
tl;dr blog post by new VP of AI at Halodi says the quiet parts out loud: "AI" industry is all about surveillance capitalism, sees gov't or even self-regulation as needless hurdles, and the movers & shakers are uninterested in building things that work. A thread:
First, here's the blog post, so you have the context:
1. No, LLMs can't do literature reviews. 2. Anyone who thinks a literature review can be automated doesn't understand what the purpose of a literature review is.
3. The web page linked to provides exactly 0 information about how this system was evaluated or even what it is designed for. And they are targeting it at researchers? I sure hope researchers are more critical than they seem to expect.
You read it and/or hit a paywall and still want my analysis? Okay, here we go:
First, let's note the good intentions. The journalist reports that mental health services are hard to access (because they are insufficient, though maybe not only that), and that it would be good to have automated systems that help out.