Professor, Linguistics, UW // Faculty Director, Professional MS Program in Computational Linguistics (CLMS) // she/her // @firstname.lastname@example.org
Nov 16 • 11 tweets • 5 min read
Facebook (sorry: Meta) AI: Check out our "AI" that lets you access all of humanity's knowledge.
Also Facebook AI: Be careful though, it just makes shit up.
This isn't even "they were so busy asking if they could" territory; rather, they failed to spend even 5 minutes asking if they should.
Using a large LM as a search engine was a bad idea when it was proposed by a search company. It's still a bad idea now, from a social media company. Fortunately, @chirag_shah and I already wrote the paper laying that all out:
Today's #AIhype take-down + analysis (first crossposted to both Twitter & Mastodon): an "AI politician". vice.com/en/article/jgp…
Working from the reporting by @chloexiang at @motherboard, it appears that this is some sort of performance art, except that the project is (purports to be?) interacting with the actual Danish political system.
Oct 28 • 17 tweets • 7 min read
I guess it's a milestone for "AI" startups when they get their puff-pieces in the media. I want to highlight some obnoxious things about this one, on Cohere. #AIhype ahead...
At first glance, this headline seems to be claiming that from text messages (whose? accessed how?) an "AI" can detect mental health issues as well as human psychiatrists do (how? based on what data?).
Oct 3 • 7 tweets • 3 min read
It's good that @wired is covering this and shedding light on the unregulated mess that is the application of chatbots (and other so-called "AI") to mental health services.
Without the incentive to sell their systems (or generate clicks on articles) would these people really believe that e.g. writing a book is just choosing lots of word forms and putting them on the page?
Oct 2 • 7 tweets • 5 min read
No, a machine did not testify before Congress. It is irresponsible for @jackclarkSF to claim that it did and for @dannyfortson to repeat that claim, with no distance or skepticism, in the Sunday Times.
>> @jackclarkSF @dannyfortson Here is what the English verb "testify" means (per @MerriamWebster). 3/4 of these are things that a language model can't do: it can't swear an oath, it can't speak from personal knowledge or bear witness, and it can't express a personal conviction.
Sep 28 • 12 tweets • 7 min read
Let's do a little #AIhype analysis, shall we? Shotspotter claims to be able to detect gunshots from audio, and its use case is to alert the cops so they can respond.
Q1: Is it plausible that a system could give the purported output (time & location of gunshot) given the inputs (audio recordings from surveillance microphones deployed in a neighborhood)?
Sep 19 • 25 tweets • 12 min read
This article in the Atlantic by Stephen Marche is so full of #AIhype it almost reads like a self-parody. So, for your entertainment/education in spotting #AIhype, I present a brief annotated reading:
Straight out of the gate, he's not just comparing "AI" to "miracles" but flat out calling it one and quoting Google & Tesla (ex-)execs making comparisons to "God" and "demons".
Sep 14 • 12 tweets • 5 min read
*sigh* once again relegated to the critics' box. The framing in this piece leans so hard into the victims (no one believed us) persevering (we showed 'em!) narrative of the deep learning folks. #AIhype ahead:
"Success draws critics", uh nope. I'm not in this conversation because of whatever success deep learning has had. I'm in it because of the unfounded #AIhype and the harms being carried out in the name of so-called "AI".
Aug 25 • 5 tweets • 2 min read
This piece is stunning: stunningly beautifully written, stunningly painful, and stunningly damning of family policing, of the lack of protections against data collection in our country, and of the tech-solutionist mindset that attempts to remove "fallible" human decision makers.
Reading the recent Vox article about effective altruism ("EA") and longtermism, I'm once again struck by how *obvious* it is that these folks are utterly failing at ceding any power, and how completely mismatched "optimization" is with the goals of doing actual good in the world.
Just a few random excerpts, because it was so painful to read...
Jul 26 • 6 tweets • 3 min read
In Stochastic Parrots, we referred to attempts to mimic human behavior as "a bright line in ethical AI development" (I'm pretty sure that point was due to @mmitchell_ai but we all gladly signed off!). This particular instance was done carefully, however >>
vice.com/en/article/epz… @mmitchell_ai Given the pretraining+fine-tuning paradigm, I'm afraid we're going to see more and more of these, mostly not done with nearly the degree of care. See, for example, this terrible idea from AI21 labs:
Thinking back to Batya Friedman's great keynote at #NAACL2022 (she's with UW's @TechPolicyLab and the Value Sensitive Design Lab). She ended with some really valuable ideas for going forward, in these slides:
Here, I really appreciated point 3, "Think outside the AI/ML box".
As societies and as scientific communities, we are surely better served by exploring multiple paths rather than piling all resources (funding, researcher time & ingenuity) on MOAR DATA, MOAR COMPUTE! Friedman points out that this is *environmentally* urgent as well.
Jul 3 • 13 tweets • 3 min read
No it effing can't. This headline is breathtakingly irresponsible.
1. Data was logs maintained by the cities in question (so data "collected" via reports to police/policing activity).
2. The only info for each incident they're using is location, time & type of crime.
Jul 1 • 14 tweets • 5 min read
New paper thread:
Precision grammars (grammars as software) can be beneficial for linguistic hypothesis testing and language description. In a new @NEJLangTech paper (Howell & Bender 2022) we ask: to what extent can they be built automatically?
@NEJLangTech Built automatically out of what? Two rich sources of linguistic knowledge:
1. Collections of IGT (interlinear glossed text), reflecting linguistic analysis of the language
2. The Grammar Matrix customization system, a distillation of typological and syntactic analyses
Jun 29 • 13 tweets • 3 min read
Some reflections on media coverage of tech/science/research. It seems to me that there are broadly speaking two separate paths of origin for these stories: In one, the journalist sees something that they think the public should be informed of, and digs into the scholarship.
In the other, the researchers have something they want to draw the world's attention to. But there are two subcases here:
Researchers (usually in academia) who see a need for the public to be informed, either acutely (ppl need this info NOW) or long-term (science literacy).
Jun 11 • 5 tweets • 2 min read
I see lots and lots of that distraction. Every time one of you talks about LLMs, DALL-E etc as "a step towards AGI" or "reasoning" or "maybe slightly conscious" you are setting up a context in which people are led to believe that "AIs" are here that can "make decisions".
And then meanwhile OpenAI/Cohere/AI2 put out a weak-sauce "best practices" document which proclaims "represent diverse voices" as a key principle ... without any evidence of engaging with the work of the Black women scholars leading this field.
I guess the task of asking journalists to maintain a critical distance from so-called "AI" is going to be unending.
For those who don't see what the problem is, please see: medium.com/@emilymenonben…
This latest example comes from The Economist. It is a natural human reaction to *make sense of* what we see, but the thing is we have to keep in mind that all of that meaning making is on our side, not the machines'.