I see lots and lots of that distraction. Every time one of you talks about LLMs, DALL-E, etc. as "a step towards AGI" or "reasoning" or "maybe slightly conscious" you are setting up a context in which people are led to believe that "AIs" are here that can "make decisions".
>>
And then meanwhile OpenAI/Cohere/AI2 put out a weak-sauce "best practices" document which proclaims "represent diverse voices" as a key principle ... without any evidence of engaging with the work of the Black women scholars leading this field.
Without actually being in conversation (or better, if you could build those connections, in community) with the voices you said "we should represent" but then ignore/erase/avoid, you can't possibly see the things that the "gee-whiz look AGI!" discourse is distracting from.
Someone who was genuinely interested in using their $$ to protect against harms done in the name of AI would be funding orgs like @DAIRInstitute, @C2i2_UCLA and @ruha9's #IdaLab. Theirs is the work that brings us closer to justice and tech that benefits society.
I don't see any current or future problems facing humanity that are addressed by building ever larger LMs, with or without calling them AGI, with or without ethics washing, with or without claiming "for social good".
This latest example comes from The Economist. It is a natural human reaction to *make sense of* what we see, but we have to keep in mind that all of that meaning making is on our side, not the machines'.
And can I just add that the tendency of journalists who write like this to center their own experience of awe---instead of actually informing the public---strikes me as quite self-absorbed.
I not infrequently see an argument that goes: "Making ethical NLP (or "AI") systems is too hard because humans haven't agreed on what is ethical/moral/right"
This always feels like a cop-out to me, and I think I've put my finger on why:
>>
That argument presupposes that the goal is to create autonomous systems that will "know" how to behave "ethically".
tl;dr blog post by new VP of AI at Halodi says the quiet parts out loud: "AI" industry is all about surveillance capitalism, sees gov't or even self-regulation as needless hurdles, and the movers & shakers are uninterested in building things that work. A thread:
First, here's the blog post, so you have the context:
1. No, LLMs can't do literature reviews. 2. Anyone who thinks a literature review can be automated doesn't understand what the purpose of a literature review is.
3. The web page linked to provides exactly 0 information about how this system was evaluated or even what it is designed for. And they are targeting it at researchers? I sure hope researchers are more critical than they seem to expect.
You read it and/or hit a paywall and still want my analysis? Okay, here we go:
First, let's note the good intentions. The journalist reports that mental health services are hard to access (because they're insufficient, but maybe not only that), and that it would be good to have automated systems that help out.
.@jeffdean is quoted here saying that Stochastic Parrots “surveyed valid concerns with large language models, and in fact many teams at Google are actively working on these issues.”
@JeffDean (His earlier comments that Stochastic Parrots “didn’t meet our bar for publication” are also cited --- nvm that it was published, after **anonymous** peer review, and that the PaLM paper is just a preprint...)
>>
Anyway: "Google is actively working on these issues" is not a satisfactory response to the concerns that we & others have raised, esp when they come out with papers like the PaLM paper which are so utterly sloppy in how they handle ethical considerations.