Let's do a little #AIhype analysis, shall we? ShotSpotter claims to be able to detect gunshots from audio, and its use case is to alert the cops so they can respond.
Q1: Is it plausible that a system could give the purported output (time & location of gunshot) given the inputs (audio recordings from surveillance microphones deployed in a neighborhood)?
>>
A1: At a guess, such a system could detect loud noises that include gunshots (but lots of other things) and might be able to provide some location information (which mics picked it up?), but keep in mind that cityscapes provide lots of opportunities for echoes (see the sketch below)...
>>
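To make the plausibility question concrete: here is a minimal, purely illustrative sketch (not ShotSpotter's actual method, which has not been published) of how differences in arrival times across several microphones could be turned into a location estimate, assuming clean, straight-line sound propagation. The function name locate, the speed-of-sound constant, and the toy coordinates are all made up for illustration. The point is that the arithmetic only works if each mic hears the direct sound, which is exactly the assumption that urban echoes undermine.

# Illustrative only: locating a loud impulse from the times at which
# several microphones hear it, assuming echo-free, straight-line propagation.
import numpy as np
from scipy.optimize import least_squares

SPEED_OF_SOUND = 343.0  # m/s, roughly, in outdoor air

def locate(mic_positions, arrival_times):
    """Estimate the (x, y) origin of an impulse from per-mic arrival times.

    mic_positions: shape (n, 2), sensor coordinates in meters.
    arrival_times: shape (n,), when each mic heard the bang, in seconds.
    """
    mic_positions = np.asarray(mic_positions, dtype=float)
    arrival_times = np.asarray(arrival_times, dtype=float)
    # Time differences of arrival, relative to the first microphone.
    tdoa = arrival_times - arrival_times[0]

    def residuals(xy):
        dists = np.linalg.norm(mic_positions - xy, axis=1)
        # Predicted time differences under direct (echo-free) propagation.
        predicted = (dists - dists[0]) / SPEED_OF_SOUND
        return predicted - tdoa

    guess = mic_positions.mean(axis=0)  # start from the centroid of the mics
    return least_squares(residuals, guess).x

# Toy usage: four mics on street corners, a bang at (40, 25).
mics = [(0, 0), (100, 0), (0, 100), (100, 100)]
true_source = np.array([40.0, 25.0])
times = [np.linalg.norm(np.array(m) - true_source) / SPEED_OF_SOUND for m in mics]
print(locate(mics, times))  # ~[40, 25] only because this toy world has no echoes or noise

In a real cityscape, reflections off buildings mean a mic may record the echo rather than the direct sound, and the same least-squares fit then converges confidently on the wrong place.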
Q2: How was the system evaluated?
A2: We don't actually know, but the company says their "ground truth" data come from cops.
.@MayorofSeattle @SeattleCouncil we should under no circumstances be deploying systems that have not been evaluated for accuracy by neutral third parties.
>>
@MayorofSeattle @SeattleCouncil Also relevant here: The company seems to be saying "Just because there's no evidence that we were right doesn't mean there wasn't gunfire." THIS IS NOT THE ATTITUDE OF CAREFUL ENGINEERS!
A3: The citizens & residents of Seattle whose neighborhoods are repeatedly accosted by police coming in on high alert with the belief that a gun was just fired. What a recipe for disaster.
A4: Same, frankly. It is not at all clear that the people who live where these surveillance systems are set up benefit from police barging in on high alert. Who asked for this, @MayorofSeattle? Does it meet their needs?
>>
@MayorofSeattle @SeattleCouncil Q5: What problem is the system meant to solve, and how does the framing of the automated system narrow the types of solutions that are under consideration?
>>
@MayorofSeattle @SeattleCouncil A5: Looks like the problem is gun violence. But framing the solution as starting from detecting the sound of gunshots is fundamentally reactive, and meets violence (and false reports of violence) with surveillance at best and state violence at worst.
>>
@MayorofSeattle @SeattleCouncil A5 cont: This framing leaves out of view all proactive approaches to reducing gun violence, starting with, ahem, FEWER GUNS but also programs that address root causes.
>>
In summary, hell no @MayorofSeattle and @SeattleCouncil. A tech city like Seattle should know better than to fall for #AISnakeOil and a city with Seattle's policing history must do better than to head down paths like these.
• • •
Step 1: Lead off with AI hype. AI is "profound"!! It helps people "unlock their potential"!!
There is some useful tech that meets the description in these paragraphs. But I don't think anything is clarified by calling machine translation or information extraction "AI".
>>
And then another instance of "standing in awe of scale". The subtext here is it's getting bigger so fast --- look at all of that progress! But progress towards what and measured how?
I suggest you read the whole thing, but some pull quotes:
>>
@danmcquillan "ChatGPT is a part of a reality distortion field that obscures the underlying extractivism and diverts us into asking the wrong questions and worrying about the wrong things." -- @danmcquillan
>>
"The compulsion to show 'balance' by always referring to AI's alleged potential for good should be dropped by acknowledging that the social benefits are still speculative while the harms have been empirically demonstrated."
@mathbabedotorg I do think there's a positive role for shame in this case --- shame here is reinforcing community values against "experimenting" with vulnerable populations without doing due diligence re research ethics.
>>
It seems that part of the #BigData #mathymath #ML paradigm is that people feel entitled to run experiments involving human subjects who haven't had relevant training in research ethics—y'know, computer scientists bumbling around thinking they have the solutions to everything. >>
There's a certain kind of techbro who thinks it's a knock-down argument to say "Well, you haven't built anything". As if the only people whose expertise counts are those close to the machine. I'm reminded (again) of @timnitGebru's wise comments on "the hierarchy of knowledge". >>
I've been pondering some recently about where that hierarchy comes from. It's surely reinforced by the way that $$ (both commercial and, sadly, federal research funds) tends to flow --- and people mistaking VCs, for example, as wise decision makers.
>>
But I also think that some of it has roots in the way different subjects are taught. Math & CS are both (frequently) taught in very gate-keepy ways (think weeder classes), and students are evaluated with very cut-and-dried exams.
Trying out You.com because people are excited about their chatbot. First observation: their disclaimer. Here's this thing we're putting up for everyone to use, while also knowing (and saying) that it actually doesn't work.
Second observation: The footnotes, allegedly giving the source of the information provided in chatbot style, are difficult to interpret. How much of that paragraph is actually sourced from the relevant page? Where does the other "info" come from?
A few of the queries I tried returned paragraphs with no footnotes at all.
Chatbots-as-search is an idea based on optimizing for convenience. But convenience is often at odds with what we need to be doing as we access and assess information.