New paper: 'On the Apparent Conflict Between Individual and Group Fairness', accepted at @fatconference (now up at arxiv.org/abs/1912.06883). The paper addresses a distinction drawn between two broad approaches to measuring fairness in machine learning:
'Individual fairness' measures compare individuals, e.g. people 'similar' according to some task-relevant metric should get the same outcome. 'Group fairness' measures compare protected groups (e.g. gender, race, age) for differences in errors/outcomes/calibration/etc.
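To make the two kinds of measure concrete, here is a minimal sketch in Python (the toy data and function names are invented for illustration, not from the paper): a group fairness check compares decision rates across protected groups, while an individual fairness check in the style of Dwork et al. tests a Lipschitz condition, |f(x) - f(y)| <= L * d(x, y), requiring that 'similar' individuals get 'similar' outcomes.

```python
import numpy as np

# Toy data (invented for illustration): features, scores, protected group.
rng = np.random.default_rng(0)
X = rng.uniform(size=(100, 3))        # individuals' feature vectors
scores = X.mean(axis=1)               # stand-in model score in [0, 1]
y_pred = (scores > 0.5).astype(int)   # binary decisions
group = rng.integers(0, 2, size=100)  # protected attribute (0 or 1)

# Group fairness (one variant, demographic parity): compare
# positive-decision rates between the two protected groups.
parity_gap = abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

# Individual fairness (Dwork et al. '12 style): count pairs violating
# the Lipschitz condition |f(x) - f(y)| <= L * d(x, y).
def lipschitz_violations(X, f, d, L=1.0):
    n = len(X)
    return sum(abs(f[i] - f[j]) > L * d(X[i], X[j])
               for i in range(n) for j in range(i + 1, n))

euclidean = lambda a, b: float(np.linalg.norm(a - b))
print(parity_gap, lipschitz_violations(X, scores, euclidean))
```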
As typically presented, they reflect different + conflicting normative principles: group fairness seems to ensure egalitarian equality, whereas individual fairness seems to ensure consistency (the more/less qualified an individual, the better/worse their outcome).
But (I argue) the normative conflict only aligns that way if you pick a version of individual fairness which assumes there *isn't* any structural discrimination, and a version of group fairness which assumes there *is*.
But some versions of group fairness preserve consistency and ignore structural discrimination (e.g. equal calibration), and conversely, some versions of individual fairness can factor structural discrimination into the metric, adjusting scores so otherwise 'different' people are 'similar' once disadvantage is taken into account (e.g. in Dwork et al. '12 re: SATs in college admissions). So the conflict isn't between individual vs group per se; it's about different worldviews, which can be reflected in variants of either kind of measure.
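As a toy illustration of that second move, the similarity metric can adjust raw scores for estimated disadvantage before comparing applicants. (The per-group offset below is an invented number, not a figure from Dwork et al.)

```python
# Invented per-group offset: the 120-point figure is an assumption for
# illustration, not taken from Dwork et al.
ADJUSTMENT = {"advantaged": 0, "disadvantaged": 120}

def adjusted_distance(sat_a, group_a, sat_b, group_b):
    """Distance between applicants after correcting raw SAT scores."""
    return abs((sat_a + ADJUSTMENT[group_a]) - (sat_b + ADJUSTMENT[group_b]))

# Two applicants with 'different' raw scores count as 'similar' once
# disadvantage is taken into account:
print(adjusted_distance(1380, "advantaged", 1260, "disadvantaged"))  # -> 0
```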
Finally: individual fairness may appear to protect what the German constitution calls Einzelfallgerechtigkeit (individual justice). But even individually-fair ML is not individually just in this sense, because it still generalises between individuals who share the same point in feature space.


More from @RDBinns

31 Dec 20
At the beginning of 2020 I was tired of the 'AI ethics' discourse. But by the end of the year, I'm feeling inspired and awed by the bravery, integrity, skill and wisdom of those who've taken meaningful political action against computationally-mediated exploitation and oppression.
The conversation has moved from chin-stroking, industry-friendly discussion, towards meaningful action, including worker organising, regulation, litigation, and building alternative structures. But this move from ethics to praxis inevitably creates fault lines between strategies.
Should we work to hold systems to account 'from the inside'? Legislate and enforce regulation from outside? Resist them from the ground up? Build alternative socio-technical systems aligned with counterpower?
9 Nov 20
Looking forward to reading this (a recommendation from @gileslane), by the late Mike Cooley: engineer, academic, shop steward, and activist behind the Lucas Plan. en.m.wikipedia.org/wiki/Mike_Cool…
NB: this book (from 1980) actually coined '*human-centred* systems', as an explicitly socialist and socio-technical political movement centering the needs of the people who make and use technology. A far cry from the kind of human-centred design critiqued by Don Norman (2005)
Some highlights:
Some like to think computers should do the calculating, while people do the creativity and value judgements. But the two can't just be combined "like chemical compounds". It doesn't scale.
22 Jul 20
Thread on possible implications of #SchremsII for end-to-end crypto approaches to protecting personal data. Background: last week the Court of Justice of the EU (CJEU) issued its judgment in Case C-311/18, "Schrems II". Amongst other things, it invalidates Privacy Shield, one of the mechanisms
enabling data transfers from the EU to the US. This was in part because US law lacks sufficient limitations on law enforcement access to data, so the protection of data in the US is not 'essentially equivalent' to that in the EU. Similar arguments could apply elsewhere (e.g. the UK).
The main alternative mechanism enabling transfers outside the EEA is the use of 'standard contractual clauses' (SCCs) under Article 46(2)(c) GDPR. But the Court affirmed that SCCs also need to ensure 'essentially equivalent' protection.