Thread summary of new paper (at #CHI2021): "Exploring Design and Governance Challenges in the Development of Privacy-Preserving Computation" w/ Nitin Agrawal, @emax, Kim Laine & @Nigel_Shadbolt arxiv.org/abs/2101.08048
New techniques for 'privacy-preserving computation' (PPC) (inc. homomorphic encryption, secure multi-party computation, differential privacy) present novel questions for HCI, from usability and mental models, to the values they embed/elide, & their social + political implications
We interviewed experts working on PPC, from academia, industry, law & policy, and design, and identified challenges in moving from theory to practice, interdisciplinary translation, developer usability, explanation, governance and accountability
PPCs challenge typical approaches to abstraction and complexity; hiding implementation details behind libraries and APIs makes it difficult for developers to optimise through engineering 'tricks', and to reason sensibly about the parameters that determine security guarantees
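For illustration (not from the paper): in differential privacy, the privacy budget epsilon is exactly this kind of parameter. A minimal Python sketch of the Laplace mechanism, assuming a simple count query with sensitivity 1, shows how the choice of epsilon trades privacy against noise — a choice that is easy to misjudge when it sits behind a library default.

```python
import numpy as np

def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float) -> float:
    """Release a noisy value satisfying epsilon-differential privacy."""
    # Smaller epsilon => stronger privacy guarantee, but noisier output.
    scale = sensitivity / epsilon
    return true_value + np.random.laplace(loc=0.0, scale=scale)

# The exact query result, e.g. a count over a sensitive dataset (toy value).
exact_count = 1042

# The developer must pick epsilon; reasoning about the guarantee it buys
# is hard when the parameter is hidden behind an abstraction.
for epsilon in (0.1, 1.0, 10.0):
    print(epsilon, laplace_mechanism(exact_count, sensitivity=1.0, epsilon=epsilon))
```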
While PPCs aim to protect 'privacy', we found this unpacked in subtly different ways, from human rights to corporate secrets and regulatory compliance. Several pointed to challenges in explaining and justifying these systems to decision-makers, stakeholders, and society at large
In addition to studying 'acceptability' for end-users / data subjects, it is equally important to consider the plurality of different actors and contexts through which values like privacy will be understood, traded off, and embedded in these systems (or not)
PPC techniques have an aura of mystique. Their construction is more craft than science, and their inner workings and risks can't be easily communicated. There's a risk that they become not just technical but technocratic solutions (what we call 'privacy-enhancing technocracy')
Discourse around PPCs may also function to distract us from the ways computation might reinforce existing problematic power structures, or redefine them as ‘privacy’ problems, so that PPCs can be positioned as the solution, while leaving those structures intact
At the beginning of 2020 I was tired of the 'AI ethics' discourse. But by the end of the year, I'm feeling inspired and awed by the bravery, integrity, skill and wisdom of those who've taken meaningful political action against computationally-mediated exploitation and oppression.
The conversation has moved from chin-stroking, industry-friendly discussion, towards meaningful action, including worker organising, regulation, litigation, and building alternative structures. But this move from ethics to praxis inevitably creates fault lines between strategies.
Should we work to hold systems to account 'from the inside'? Legislate and enforce regulation from outside? Resist them from the ground up? Build alternative socio-technical systems aligned with counterpower?
Looking forward to reading this (recommendation of @gileslane), by the late Mike Cooley, engineer, academic, shop steward, and activist behind the Lucas Plan en.m.wikipedia.org/wiki/Mike_Cool…
NB: this book (from 1980) actually coined '*human-centred* systems', as an explicitly socialist and socio-technical political movement centering the needs of the people who make and use technology. A far cry from the kind of human-centred design critiqued by Don Norman (2005)
Some highlights:
Some like to think computers should do the calculating, while people do the creativity and value judgements. But the two can't just be combined "like chemical compounds". It doesn't scale.
Thread on possible implications of #SchremsII for end-to-end crypto approaches to protecting personal data. Background: last week the Court of Justice of the EU (CJEU) issued its judgment in Case C-311/18, “Schrems II”. Amongst other things, it invalidates Privacy Shield, one of the mechanisms
enabling transfers from the EU to the US. This was in part because US law lacks sufficient limitations on law enforcement access to data, so the protection of data in the US is not 'essentially equivalent' to that in the EU. Similar arguments could apply elsewhere (e.g. UK).
The main alternative mechanism enabling transfers outside the EEA is the use of 'standard contractual clauses' (SCCs) under Article 46(2)(c) GDPR. But the Court affirmed that SCCs also need to ensure 'essentially equivalent' protection.
New paper: 'On the Apparent Conflict Between Individual and Group Fairness' accepted at @fatconference (now up at arxiv.org/abs/1912.06883). The paper addresses a distinction drawn between two broad approaches to measuring fairness in machine learning
'individual fairness' measures compare individuals, e.g. people 'similar' according to some task-relevant metric should get the same outcome. 'Group fairness' measures compare protected groups (e.g. gender, race, age) for differences in errors/outcomes/calibration/etc.
As typically presented, they reflect different + conflicting normative principles: group fairness seems to ensure egalitarian equality, whereas individual fairness seems to ensure consistency (the more/less qualified an individual, the better/worse their outcome).
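A toy sketch (illustrative only, not code from the paper, using a tiny made-up dataset): a group fairness measure compares positive prediction rates across a protected attribute, while an individual fairness check asks whether individuals who are similar under some task-relevant metric receive similar outcomes.

```python
import numpy as np

# Toy data: binary predictions and a protected attribute with two groups.
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0])
group  = np.array(['a', 'a', 'a', 'a', 'b', 'b', 'b', 'b'])

# Group fairness (demographic parity): compare positive rates across groups.
rate_a = y_pred[group == 'a'].mean()
rate_b = y_pred[group == 'b'].mean()
print("demographic parity gap:", abs(rate_a - rate_b))

# Individual fairness (consistency): similar individuals should receive
# similar outcomes under a chosen task-relevant similarity metric.
features = np.array([[0.9], [0.2], [0.8], [0.7], [0.3], [0.85], [0.1], [0.25]])

def consistency_gap(x_i, x_j, pred_i, pred_j, lipschitz=1.0):
    # Outcome difference minus (scaled) feature distance; > 0 flags a violation.
    return abs(pred_i - pred_j) - lipschitz * np.linalg.norm(x_i - x_j)

n = len(y_pred)
violations = [
    (i, j) for i in range(n) for j in range(i + 1, n)
    if consistency_gap(features[i], features[j], y_pred[i], y_pred[j]) > 0
]
print("pairs violating the Lipschitz-style condition:", violations)
```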