New paper: 'On the Apparent Conflict Between Individual and Group Fairness' accepted at @fatconference (now up at arxiv.org/abs/1912.06883). The paper addresses a distinction drawn between two broad approaches to measuring fairness in machine learning.
'Individual fairness' measures compare individuals: e.g. people who are 'similar' according to some task-relevant metric should receive similar outcomes. 'Group fairness' measures compare protected groups (e.g. by gender, race, age) for differences in error rates, outcomes, calibration, etc.
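For concreteness, here's a rough sketch (mine, not from the paper) of how the two kinds of measure are often operationalised: individual fairness as a Lipschitz-style condition on pairs of individuals, group fairness as a gap in positive-outcome rates between protected groups. The function names, the similarity metric, and the threshold are all illustrative assumptions.

```python
import numpy as np

def individual_fairness_violations(scores, features, metric, lipschitz=1.0):
    """Flag pairs where 'similar' individuals (per the task-relevant
    `metric`) get dissimilar outcomes: |f(x_i) - f(x_j)| > L * d(x_i, x_j)."""
    violations = []
    for i in range(len(scores)):
        for j in range(i + 1, len(scores)):
            if abs(scores[i] - scores[j]) > lipschitz * metric(features[i], features[j]):
                violations.append((i, j))
    return violations

def group_fairness_gap(outcomes, group):
    """One group-fairness measure (demographic parity): the difference
    in positive-outcome rates between two protected groups."""
    outcomes, group = np.asarray(outcomes), np.asarray(group)
    return abs(outcomes[group == 0].mean() - outcomes[group == 1].mean())

# Toy usage: two near-identical applicants with very different outcomes
d = lambda x, y: abs(x[0] - y[0])  # stand-in task-relevant metric
print(individual_fairness_violations([1.0, 0.0], [(0.1,), (0.2,)], d))  # [(0, 1)]
print(group_fairness_gap([1, 1, 0, 0], [0, 0, 1, 1]))                   # 1.0
```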
As typically presented, they reflect different + conflicting normative principles: group fairness seems to ensure egalitarian equality, whereas individual fairness seems to ensure consistency (the more/less qualified an individual, the better/worse their outcome).
But (I argue) the normative conflict only aligns that way if you pick a version of individual fairness which assumes there *isn't* any structural discrimination, and a version of group fairness which assumes there *is*.
But some versions of group fairness preserve consistency and ignore structural discrimination (e.g. equal calibration), and conversely, some versions of individual fairness can factor structural discrimination into the metric, adjusting scores so otherwise 'different' people are 'similar' once disadvantage is taken into account (e.g. Dwork et al. '12 on SATs in college admissions). So the conflict isn't between individual vs group measures per se; it's about different worldviews, which can be reflected in variants of either kind of measure.
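Here's a toy sketch, loosely in the spirit of the SAT example in Dwork et al. '12, of what a disadvantage-aware similarity metric could look like. The flat score adjustment, its size, and the group encoding are illustrative assumptions, not the paper's method.

```python
def adjusted_distance(score_a, group_a, score_b, group_b, adjustment=100.0):
    """Compare test scores after adding `adjustment` points to members of
    the disadvantaged group (encoded here as group == 1), so applicants who
    look 'different' on raw scores can count as 'similar' once structural
    disadvantage is factored into the metric."""
    adj_a = score_a + (adjustment if group_a == 1 else 0.0)
    adj_b = score_b + (adjustment if group_b == 1 else 0.0)
    return abs(adj_a - adj_b)

# A 1300 from a disadvantaged applicant is treated as similar to a 1400:
print(adjusted_distance(1300, 1, 1400, 0))  # -> 0.0
```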
Finally: individual fairness may appear to protect what the German constitution calls Einzelfallgerechtigkeit (individual justice). But even individually-fair ML is not individually just in this sense, because it still generalises between individuals who share the same point in feature space.
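A trivial illustration of that last point, under my assumption that the model is a deterministic function of its features: two distinct people who map to the same feature vector necessarily get the same outcome, so the system generalises over them no matter how individually fair the metric is.

```python
def f(features):
    # Stand-in decision rule; any deterministic function of the
    # features alone would make the same point.
    return sum(features) > 2.5

alice = (1.0, 1.0, 1.0)    # two different people...
bob   = (1.0, 1.0, 1.0)    # ...at the same point in feature space
assert f(alice) == f(bob)  # identical treatment, by construction
```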
At the beginning of 2020 I was tired of the 'AI ethics' discourse. But by the end of the year, I'm feeling inspired and awed by the bravery, integrity, skill and wisdom of those who've taken meaningful political action against computationally-mediated exploitation and oppression.
The conversation has moved from chin-stroking, industry-friendly discussion, towards meaningful action, including worker organising, regulation, litigation, and building alternative structures. But this move from ethics to praxis inevitably creates fault lines between strategies.
Should we work to hold systems to account 'from the inside'? Legislate and enforce regulation from outside? Resist them from the ground up? Build alternative socio-technical systems aligned with counterpower?
Looking forward to reading this (a recommendation from @gileslane), by the late Mike Cooley: engineer, academic, shop steward, and the activist behind the Lucas Plan en.m.wikipedia.org/wiki/Mike_Cool…
NB: this book (from 1980) actually coined '*human-centred* systems' as an explicitly socialist and socio-technical political movement centering the needs of the people who make and use technology. A far cry from the kind of human-centred design critiqued by Don Norman (2005).
Some highlights:
Some like to think computers should do the calculating while people supply the creativity and value judgements. But the two can't just be combined "like chemical compounds". It doesn't scale.
Thread on possible implications of #SchremsII for end-to-end crypto approaches to protecting personal data. Background: last week the Court of Justice of the EU (CJEU) issued its judgment in Case C-311/18, "Schrems II". Amongst other things, it invalidates Privacy Shield, one of the mechanisms enabling data transfers from the EU to the US. This was in part because US law lacks sufficient limitations on law enforcement access to data, so the protection of data in the US is not 'essentially equivalent' to that in the EU. Similar arguments could apply elsewhere (e.g. the UK).
The main alternative mechanism enabling transfers outside the EEA is the use of 'standard contractual clauses' (SCCs) under Article 46(2)(c) GDPR. But the Court affirmed that SCCs also need to ensure 'essentially equivalent' protection.