@emilymbender.bsky.social
Sep 28, 2022
Let's do a little #AIhype analysis, shall we? ShotSpotter claims to be able to detect gunshots from audio, and its use case is to alert the cops so they can respond.

>>
Q1: Is it plausible that a system could give the purported output (time & location of gunshot) given the inputs (audio recordings from surveillance microphones deployed in a neighborhood)?

>>
A1: At a guess, such a system could detect loud noises that include gunshots (but lots of other things) and might be able to provide some location information (which mics picked it up?) but keep in mind that cityscapes provide lots of opportunities for echoes...
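For concreteness, here is a minimal sketch of time-difference-of-arrival (TDOA) localization, the textbook technique for locating a sound with a microphone array. To be clear: this is an illustrative assumption about how such a system *might* work, not ShotSpotter's documented method, and the mic positions, signals, and function names are all invented for the sketch. The point is that the location estimate hinges on picking the right correlation peak at every mic, which is exactly what urban echoes disrupt.

```python
# Illustrative TDOA localization sketch -- NOT ShotSpotter's actual method.
# Assumes known mic positions, synchronized clocks, and a single clean impulse;
# an echo that shifts the correlation peak at any one mic skews the estimate.
import numpy as np
from scipy.optimize import least_squares

SPEED_OF_SOUND = 343.0  # m/s in air at ~20 C

def arrival_delay(ref_signal, mic_signal, sample_rate):
    """Delay (seconds) of mic_signal relative to ref_signal, via cross-correlation."""
    corr = np.correlate(mic_signal, ref_signal, mode="full")
    lag = np.argmax(corr) - (len(ref_signal) - 1)
    return lag / sample_rate  # an echo can make argmax land on the wrong peak

def locate(mic_positions, delays_vs_mic0):
    """Least-squares (x, y) source estimate from delays measured relative to mic 0."""
    def residuals(xy):
        dists = np.linalg.norm(mic_positions - xy, axis=1)
        predicted = (dists - dists[0]) / SPEED_OF_SOUND  # predicted delay vs. mic 0
        return predicted[1:] - delays_vs_mic0
    return least_squares(residuals, x0=mic_positions.mean(axis=0)).x
```

Even in this idealized form you need three or more mics for a 2D fix, and a single multipath reflection can move the estimate by a block; deciding the impulse was a gunshot rather than a firework or backfire is a separate, harder problem.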

>>
Q2: How was the system evaluated?

A2: We don't actually know, but the company says their "ground truth" data come from cops.

Source: aclu.org/news/privacy-t…

>>

[Screencap: ShotSpotter statement, from the linked ACLU article]
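Since we don't know the evaluation protocol, here's a toy illustration, with invented numbers, of why it matters who supplies the "ground truth": if alerts with no corroborating evidence are counted as correct, the same system's apparent precision balloons.

```python
# Toy illustration (all numbers invented, purely for the arithmetic):
# how counting unverifiable alerts as hits inflates apparent precision.
alerts = 1000             # total alerts raised by the system
confirmed_gunfire = 120   # alerts where evidence of gunfire was found
unconfirmed = 700         # alerts with no evidence either way
confirmed_false = 180     # alerts confirmed NOT to be gunfire

strict_precision = confirmed_gunfire / alerts
# "No evidence we were wrong" accounting: treat every unconfirmed alert as a hit.
generous_precision = (confirmed_gunfire + unconfirmed) / alerts

print(f"strict: {strict_precision:.0%}, generous: {generous_precision:.0%}")
# strict: 12%, generous: 82% -- same system, same alerts, wildly different claim.
```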
.@MayorofSeattle @SeattleCouncil we should under no circumstances be deploying systems that have not been evaluated for accuracy by neutral third parties.

>>
@MayorofSeattle @SeattleCouncil Also relevant here: The company seems to be saying "Just because there's no evidence that we were right doesn't mean there wasn't gunfire." THIS IS NOT THE ATTITUDE OF CAREFUL ENGINEERS!

>>

[Screencap from the same ACLU article]
@MayorofSeattle @SeattleCouncil Q3: Who is harmed if the system gives inaccurate results?

A3: The citizens & residents of Seattle whose neighborhoods are repeatedly accosted by police coming in on high alert with the belief that a gun was just fired. What a recipe for disaster.

>>
@MayorofSeattle @SeattleCouncil Q4: Who is harmed if the system gives accurate results?

A4: Same, frankly. It is not at all clear that the people who live where these surveillance systems are set up benefit from police barging in on high alert. Who asked for this, @MayorofSeattle? Does it meet their needs?
>>
@MayorofSeattle @SeattleCouncil Q5: What problem is the system meant to solve and how does the framing of the automated system narrow the type of solutions that are under consideration?

>>
@MayorofSeattle @SeattleCouncil A5: Looks like the problem is gun violence. But framing the solution as starting from detecting the sound of gunshots is fundamentally reactive, and meets violence (and false reports of violence) with surveillance at best and state violence at worst.

>>
@MayorofSeattle @SeattleCouncil A5 cont: This framing leaves out of view all proactive approaches to reducing gun violence, starting with, ahem, FEWER GUNS but also programs that address root causes.

>>
In summary, hell no @MayorofSeattle and @SeattleCouncil. A tech city like Seattle should know better than to fall for #AISnakeOil and a city with Seattle's policing history must do better than to head down paths like these.


More from @emilymbender

Nov 4
As OpenAI and Meta introduce LLM-driven searchbots, I'd like to once again remind people that neither LLMs nor chatbots are good technology for information access.

A thread, with links:

>>
@chirag_shah and I wrote about this in two academic papers:
2022: dl.acm.org/doi/10.1145/34…
2024: dl.acm.org/doi/10.1145/36…

We also have an op-ed from Dec 2022:
iai.tv/articles/all-k…

>>
Why are LLMs bad for search? Because LLMs are nothing more than statistical models of the distribution of word forms in text, set up to output plausible-sounding sequences of words.
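To make that concrete, here's a deliberately crude stand-in: a bigram model, which is a toy of my own for this thread, not how any production LLM is built. But it is the same family of object: a statistical model of the distribution of word forms in text, sampled to produce plausible-looking sequences. Scaling it up by many orders of magnitude buys fluency; the mechanism is still "emit a likely next word," not "retrieve and convey information."

```python
# Toy bigram "language model" (illustrative only): estimate P(next word | word)
# from a corpus, then sample. The output is plausible word sequences, not facts.
import random
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the dog sat on the rug".split()

# Count bigram continuations: a distribution over word forms, nothing more.
continuations = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    continuations[prev][nxt] += 1

def generate(start, length=8):
    word, out = start, [start]
    for _ in range(length):
        options = continuations.get(word)
        if not options:
            break
        # Sample proportionally to observed frequency: plausible, not true.
        word = random.choices(list(options), weights=list(options.values()))[0]
        out.append(word)
    return " ".join(out)

print(generate("the"))  # e.g. "the cat sat on the rug" -- fluent, contentless
```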



>>
Feb 29
It seems like there are just endless bad ideas about how to use "AI". Here are some new ones courtesy of the UK government.

... and a short thread because there is so much awfulness in this one article.
/1


ft.com/content/f2ae55…
Screencap: "UK ministers are piloting the use of generative artificial intelligence to analyse responses to government consultations and write draft answers to parliamentary questions.  Oliver Dowden, the deputy prime minister, will on Thursday unveil tools that the AI “crack squad” at the heart of Whitehall is trialling with a view to wider rollouts across central departments and public services."
Either it's a version of ChatGPT OR it's a search system where people can find the actual sources of the information. Both of those things can't be true at the same time. /2

Screencap: "The AI tools include using government-hosted versions of ChatGPT and a mix of open-source AI models securely hosted in-house to draft preliminary responses to questions to ministers submitted by MPs and to freedom of information requests. The drafts would always be checked by a human civil servant and the AI tools are programmed to ensure they cite their sources on all claims, so they can be verified."
Also: the output of "generative AI", synthetic text, is NOT information. So, UK friends, if your government is actually using it to respond to freedom of information requests, they are presumably violating their own laws about freedom of information requests. /3
Jan 14
It is depressing how often Bender & Koller 2020 is cited incorrectly. My best guess is that ppl writing abt whether or not LLMs 'understand' or 'are agents' have such strongly held beliefs abt what they want to be true that this impedes their ability to understand what we wrote.
Or maybe they aren't actually reading the paper --- just summarizing based on what other people (with similar beliefs) have mistakenly said about the paper.

>>
Today's case in point is a new arXiv posting, "Are Language Models More Like Libraries or Like Librarians? Bibliotechnism, the Novel Reference Problem, and the Attitudes of LLMs" by Lederman & Mahowald, posted Jan 10, 2024.



arxiv.org/pdf/2401.04854…

>>
Dec 7, 2023
A quick thread on #AIhype and other issues in yesterday's Gemini release: 1/
#1 -- What an utter lack of transparency. Researchers from multiple groups, including @mmitchell_ai and @timnitgebru when they were at Google, have been calling for clear and thorough documentation of training data & trained models since 2017. 2/
In Bender & Friedman 2018, we put it like this: 3/

Screencap: "These two recommendations will need to be implemented with care. We have already noted the potential barrier to access. Secrecy concerns may also arise in some situations (e.g., some groups may be willing to share datasets but not demographic information, for fear of public relations backlash or to protect the safety of contributors to the dataset). That said, as consumers of datasets or products trained with them, NLP researchers, developers, and the general public would be well advised to use systems only if there is access to the information we propose should be included ...
Nov 24, 2023
With the OpenAI clownshow, there's been renewed media attention on the xrisk/"AI safety" nonsense. Personally, I've had a fresh wave of reporters asking me naive questions (+ some contacts from old hands who know how to handle ultra-rich man-children with god complexes). 🧵1/
As a quick reminder: AI doomerism is also #AIhype. The idea that synthetic text extruding machines are harbingers of AGI that is on the verge of combusting into consciousness and then turning on humanity is unscientific nonsense. 2/
At the same time, it serves to suggest that the software is powerful, even magically so: if the "AI" could take over the world, it must be something amazing. 3/
Jun 11, 2023
There's a lot I like in this op-ed, but unfortunately it ends with some gratuitous ableism (and also weird remarks about AGI as a "holy grail").

First, the good parts:

theguardian.com/commentisfree/…
"[False arrests w/face rec tech] should be at the heart of one of the most urgent contemporary debates: that of artificial intelligence and the dangers it poses. That it is not, and that so few recognise it as significant, shows how warped has become the discussion of AI,"

>>
"We have stumbled into a digital panopticon almost without realising it. Yet to suggest we live in a world shaped by AI is to misplace the problem. There is no machine without a human, and nor is there likely to be."

>>
