This piece is stunning: stunningly well written, stunningly painful, and stunningly damning of family policing, of the lack of protections against data collection in our country, & of the mindset of tech solutionism that attempts to remove "fallible" human decision makers.
.@UpFromTheCracks's essay is both a powerful call for the immediate end of family policing and an extremely pointed case study in so many aspects of what gets called #AIethics:
1. What are the potentials for harm from algorithmic decision making?
>>
2. The absolutely essential role of lived experience and positionality in understanding those harms. 3. The ways in which data collection sets up future harms.
>>
4. The ways in which the questions asked determine the possible answers/outcomes. 5. Again, the absolutely essential role of lived experience and positionality in understanding the harms of those outcomes.
>>
I could go on, but in short: This is required reading for anyone working anywhere near #AIethics, algorithmic decision making, data protection, and/or child "welfare" (aka family policing).
Reading the recent Vox article about effective altruism ("EA") and longtermism, I'm once again struck by how *obvious* it is that these folks are utterly failing to cede any power & how completely mismatched "optimization" is with the goal of doing actual good in the world.
>>
In Stochastic Parrots, we referred to attempts to mimic human behavior as "a bright line in ethical AI development" (I'm pretty sure that point was due to @mmitchell_ai, but we all gladly signed off!). This particular instance was done carefully, however >>
@mmitchell_ai Given the pretraining+fine-tuning paradigm, I'm afraid we're going to see more and more of these, mostly not done with nearly that degree of care. See, for example, this terrible idea from AI21 Labs:
@mmitchell_ai As Dennett says in the VICE article, regulation is needed---I'd add: regulation informed by an understanding of both how the systems work and how people react to them.
Thinking back to the great #NAACL2022 keynote by Batya Friedman (of UW's @TechPolicyLab and Value Sensitive Design Lab). She ended with some really valuable ideas for going forward, in these slides:
Here, I really appreciated point 3, "Think outside the AI/ML box".
>>
As societies and as scientific communities, we are surely better served by exploring multiple paths rather than piling all resources (funding, researcher time & ingenuity) on MOAR DATA, MOAR COMPUTE! Friedman points out that this is *environmentally* urgent as well.
>>
Where above she draws on the lessons of nuclear power (what other robust sources of non-fossil energy would we have now, if we'd spread our search more broadly back then?), here she draws on the lessons of plastics: they are key for some use cases (esp medical). >>
Some interesting 🙃 details from the underlying Nature article:
1. The data were logs maintained by the cities in question (so data "collected" via reports to police/policing activity). 2. The only info they use for each incident is location, time & type of crime.
>>
3. A prediction was counted as "correct" if a crime (by their definition) occurred in the (small) predicted area on the day of the prediction or one day before or after. A sketch of that matching criterion is below.
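To make that scoring rule concrete, here's a minimal sketch of how such a hit criterion could be implemented (my own reconstruction from the description above, not the authors' code; all names and the exact matching details are assumptions):

```python
from datetime import date, timedelta

def prediction_hits(pred_area, pred_day, incidents, window_days=1):
    """Count a prediction as 'correct' if any recorded incident falls
    in the predicted area within +/- window_days of the predicted day.
    Per the thread above, each incident carries only location, time &
    crime type; whether the type must also match isn't specified here,
    so this sketch matches on area and date alone."""
    window = timedelta(days=window_days)
    return any(
        inc["area"] == pred_area
        and abs(inc["day"] - pred_day) <= window
        for inc in incidents
    )

# Hypothetical data: one incident, one prediction that's off by a day
# but still counts as a hit under the +/- 1 day window.
incidents = [{"area": "tract_0042", "day": date(2022, 7, 14), "type": "burglary"}]
print(prediction_hits("tract_0042", date(2022, 7, 15), incidents))  # True
```

Note how generous that window is: a single matching incident anywhere in a three-day span around the prediction counts as a hit.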
Precision grammars (grammars as software) can be beneficial for linguistic hypothesis testing and language description. In a new @NEJLangTech paper (Howell & Bender 2022) we ask: to what extent can they be built automatically?
@NEJLangTech Built automatically out of what? Two rich sources of linguistic knowledge:
1. Collections of IGT (interlinear glossed text), reflecting linguistic analysis of the language. 2. The Grammar Matrix customization system, a distillation of typological and syntactic analyses. (A toy example of an IGT record follows below.)
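For readers who haven't worked with IGT: a record pairs a source-language line with a morpheme-by-morpheme gloss and a free translation. Here's a toy illustration in Python (the field names and this particular Japanese example are mine, not the AGGREGATION project's actual data format):

```python
# One interlinear glossed text (IGT) record: source line, morpheme
# segmentation, per-morpheme gloss, and a free translation.
igt_record = {
    "orthography": "inu ga hoeta",
    "morphemes":   ["inu", "ga", "hoe-ta"],
    "gloss":       ["dog", "NOM", "bark-PST"],
    "translation": "The dog barked.",
}
```

Collections of such records encode a lot of latent analysis (segmentation, category labels like NOM and PST), which is part of what makes them a rich input for building grammars automatically.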
>>
This is the latest update from the AGGREGATION project (underway since ~2012), and builds on much previous work by @OlgaZamaraeva, Goodman, @fxia8, @ryageo, Crowgey, Wax and others!