Thinking back to the great #NAACL2022 keynote by Batya Friedman (of UW's @TechPolicyLab and Value Sensitive Design Lab). She ended with some really valuable ideas for going forward, in these slides:

Here, I really appreciated #3, "Think outside the AI/ML box".

>> [Screenshot of slide]
As societies and as scientific communities, we are surely better served by exploring multiple paths rather than piling all resources (funding, researcher time & ingenuity) on MOAR DATA, MOAR COMPUTE! Friedman points out that this is *environmentally* urgent as well.

>>
Where above she draws on the lessons of nuclear power (what other robust sources of non-fossil energy would we have now, if we'd spread our search more broadly back then?), here she draws on the lessons of plastics: they are key for some use cases (esp. medical). >> [Screenshot of slide]
Similarly, there may be life-critical or other important cases where AI/ML really is the best bet, and we can decide to use it there, being mindful that we are using something that has impactful materiality and so should be used sparingly.

>>
Finally, I really appreciated this message about responsibility to the public. How we talk about these things matters, because we need to be empowering the public to make good decisions around regulation.

>> [Screenshot of slide]
As an example, she gives an alternative visualization of "the cloud" that makes its materiality more apparent (but still feels some steps removed from e.g. the mining operations required to create that equipment).

>> [Pencil drawing: an alternative visualization of "the cloud"]
Friedman's emphasis was on materiality & the environment, but this point holds equally true for the way we communicate about what so-called "AI" does, how it relates to data, etc.

Thanks again to Batya for such a great talk and to #NAACL2022 for bringing her to us!

