I see lots and lots of that distraction. Every time one of you talks about LLMs, DALL-E, etc. as "a step towards AGI" or "reasoning" or "maybe slightly conscious" you are setting up a context in which people are led to believe that "AIs" are here that can "make decisions".
>>
And then meanwhile OpenAI/Cohere/AI2 put out a weak-sauce "best practices" document which proclaims "represent diverse voices" as a key principle ... without any evidence of engaging with the work of the Black women scholars leading this field.



>>
Without actually being in conversation (or better, if you could build those connections, in community) with the voices you said "we should represent" but then ignore/erase/avoid, you can't possibly see the things that the "gee-whiz look AGI!" discourse is distracting from.
Someone who was genuinely interested in using their $$ to protect against harms done in the name of AI would be funding orgs like @DAIRInstitute @C2i2_UCLA and @ruha9's #IdaLab. Theirs is the work that brings us closer to justice and tech that benefits society.
I don't see any current or future problems facing humanity that are addressed by building ever larger LMs, with or without calling them AGI, with or without ethics washing, with or without claiming "for social good".

More from @emilymbender

Jun 9
I guess the task of asking journalists to maintain a critical distance from so-called "AI" is going to be unending.

For those who don't see what the problem is, please see: medium.com/@emilymenonben…
This latest example comes from The Economist. It is a natural human reaction to *make sense of* what we see, but the thing is we have to keep in mind that all of that meaning making is on our side, not the machines'.

economist.com/interactive/br…

>>
And can I just add that the tendency of journalists who write like this to center their own experience of awe---instead of actually informing the public---strikes me as quite self-absorbed.

>>
May 25
I not infrequently see an argument that goes: "Making ethical NLP (or "AI") systems is too hard because humans haven't agreed on what is ethical/moral/right"

This always feels like a cop-out to me, and I think I've put my finger on why:

>>
That argument presupposes that the goal is to create autonomous systems that will "know" how to behave "ethically".

But if you actually seriously engage with the work of authors like @ruha9 @safiyanoble @timnitGebru @csdoctorsister @Abebab @rajiinio @jovialjoy & others

>>
What you'll find is that the proposed solutions aren't autonomous systems that are "ethical", but rather:

1. (Truly) democratic oversight into what systems are deployed.
2. Transparency, so human operators can contextualize system output.

>>
Apr 29
tl;dr blog post by new VP of AI at Halodi says the quiet parts out loud: "AI" industry is all about surveillance capitalism, sees gov't or even self-regulation as needless hurdles, and the movers & shakers are uninterested in building things that work. A thread:
First, here's the blog post, so you have the context:
I came across this blog post first when the graphic about data moats was shared with me.
Apr 25
1. No, LLMs can't do literature reviews.
2. Anyone who thinks a literature review can be automated doesn't understand what the purpose of a literature review is.

>>
3. The web page linked to provides exactly 0 information about how this system was evaluated or even what it is designed for. And they are targeting it at researchers? I sure hope researchers are more critical than they seem to expect.

>>
4. What's the denominator for that 60%, I wonder?



>>
Apr 6
I find this reporting infuriating, so I'm going to use it to create a mini-lesson in detecting #AIhype.

If you're interested in following this lesson, please read the article, making note of what you think sounds exciting and what makes you skeptical.

nytimes.com/2022/04/05/tec…
You read it and/or hit a paywall and still want my analysis? Okay, here we go:
First, let's note the good intentions. The journalist reports that mental health services are hard to access (because supply is insufficient, though maybe not only that), and that it would be good to have automated systems that help out.
Apr 5
.@jeffdean is quoted here as saying that Stochastic Parrots "surveyed valid concerns with large language models, and in fact many teams at Google are actively working on these issues."

>>
@JeffDean (His earlier comments that Stochastic Parrots "didn't meet our bar for publication" are also cited --- nvm that it was published, after **anonymous** peer review, and that the PaLM paper is just a preprint...)

>>
Anyway: "Google is actively working on these issues" is not a satisfactory response to the concerns that we & others have raised, esp when they come out with papers like the PaLM paper which are so utterly sloppy in how they handle ethical considerations.

