Reuben Binns @RDBinns@someone.elses.computer
computery guy @CompSciOxford, @KelloggOx. HCI; ML; privacy; security; political economy of tech. #THFC south stand 324. he/him
Aug 2, 2022 5 tweets 2 min read
New paper: 'Directly Discriminatory Algorithms', with @JeremiasPrassl and @LawAislinn, in Modern Law Review onlinelibrary.wiley.com/doi/full/10.11…

We argue that many paradigmatic examples of algorithmic bias constitute direct discrimination rather than indirect, as often assumed 1/5

Cases of algorithmic bias - like hiring algorithms which downrank women applicants - are classed in US law as disparate impact rather than disparate treatment, because they don't intentionally use protected characteristics (as per @s010n and @aselbst's seminal paper). 2/5
Mar 5, 2021 6 tweets 3 min read
A key misunderstanding about informational privacy (which is old, but now more relevant than ever) is that it is only engaged when information does or could *identify* an individual. 🧵

But identifiability is not a precondition of any major theory of (informational) privacy (e.g. Westin's theory, informational self-determination, @HNissenbaum's contextual integrity). They all focus on information *about* an individual, not whether it can uniquely identify them.
Jan 21, 2021 8 tweets 2 min read
Thread summary of new paper (at #CHI2021): "Exploring Design and Governance Challenges in the Development of Privacy-Preserving Computation" w/ Nitin Agrawal, @emax, Kim Laine & @Nigel_Shadbolt arxiv.org/abs/2101.08048

New techniques for 'privacy-preserving computation' (PPC) (inc. homomorphic encryption, secure multi-party computation, differential privacy) present novel questions for HCI, from usability and mental models, to the values they embed/elide, & their social + political implications
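The thread itself has no code, but as a rough illustration of one PPC ingredient named above, here is a minimal sketch of differential privacy's Laplace mechanism applied to a counting query. The function names and parameters are my own for illustration, not the paper's; real deployments need careful sensitivity analysis and a vetted library.

```python
import random

def laplace_sample(scale: float) -> float:
    # The difference of two i.i.d. exponentials with rate 1/scale
    # is distributed as Laplace(0, scale).
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

def dp_count(records: list, epsilon: float) -> float:
    # A counting query has sensitivity 1: adding or removing one
    # person's record changes the true count by at most 1, so noise
    # with scale 1/epsilon gives epsilon-differential privacy.
    true_count = sum(1 for r in records if r)
    return true_count + laplace_sample(1.0 / epsilon)

# Example: release a noisy count of records with a sensitive attribute.
population = [random.random() < 0.3 for _ in range(1000)]
print(dp_count(population, epsilon=0.5))
```

Smaller epsilon means more noise and stronger privacy; the HCI questions in the paper include how users and developers form mental models of exactly this trade-off.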
Dec 31, 2020 10 tweets 2 min read
At the beginning of 2020 I was tired of the 'AI ethics' discourse. But by the end of the year, I'm feeling inspired and awed by the bravery, integrity, skill and wisdom of those who've taken meaningful political action against computationally-mediated exploitation and oppression.

The conversation has moved from chin-stroking, industry-friendly discussion, towards meaningful action, including worker organising, regulation, litigation, and building alternative structures. But this move from ethics to praxis inevitably creates fault lines between strategies.
Nov 9, 2020 7 tweets 3 min read
Looking forward to reading this (recommendation of @gileslane), by the late Mike Cooley, engineer, academic, shop steward, activist behind the Lucas Plan en.m.wikipedia.org/wiki/Mike_Cool…

NB: this book (from 1980) actually coined '*human-centred* systems', as an explicitly socialist and socio-technical political movement centring the needs of the people who make and use technology. A far cry from the kind of human-centred design critiqued by Don Norman (2005).
Jul 22, 2020 11 tweets 3 min read
Thread on possible implications of #SchremsII for end-to-end crypto approaches to protecting personal data.

Background: last week the Court of Justice of the EU (CJEU) issued its judgment in Case C-311/18, "Schrems II". Amongst other things, it invalidates Privacy Shield, one of the mechanisms enabling data transfers from the EU to the US.

This was in part because US law lacks sufficient limitations on law enforcement access to data, so the protection of data in the US is not 'essentially equivalent' to that in the EU. Similar arguments could apply elsewhere (e.g. the UK).
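To make the crypto premise concrete, a hypothetical sketch (my illustration, not from the thread) of the end-to-end approach at issue: personal data is encrypted client-side before transfer, and the key stays with the EU exporter, so the importer - and any authority compelling it - only ever holds ciphertext. Assumes the `cryptography` package.

```python
from cryptography.fernet import Fernet

# Key generated and retained by the EU data exporter; it never
# accompanies the data abroad.
key = Fernet.generate_key()
exporter = Fernet(key)

# Personal data is encrypted *before* any transfer to a non-EU processor.
ciphertext = exporter.encrypt(b"name=Jane Doe; email=jane@example.eu")

# The importer stores and handles only ciphertext; compelled access
# to its systems yields nothing intelligible without the key.
stored_abroad = ciphertext

# Only the exporter, back in the EU, can recover the plaintext.
plaintext = exporter.decrypt(stored_abroad)
```

The legal question the thread explores is whether such measures can make protection 'essentially equivalent' despite the importing country's surveillance laws - which is precisely where this only works if the importer genuinely never needs the plaintext.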
Dec 17, 2019 7 tweets 2 min read
New paper: 'On the Apparent Conflict Between Individual and Group Fairness', accepted at @fatconference (now up on arxiv.org/abs/1912.06883). The paper addresses a distinction drawn between two broad approaches to measuring fairness in machine learning.

'Individual fairness' measures compare individuals: e.g. people 'similar' according to some task-relevant metric should get the same outcome. 'Group fairness' measures compare protected groups (e.g. gender, race, age) for differences in errors/outcomes/calibration/etc.
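To make the distinction concrete, an illustrative sketch (mine, not the paper's artifact) computing one measure of each kind: a demographic-parity gap across groups, and Lipschitz-style individual-fairness violations under a task-relevant distance.

```python
from itertools import combinations

def demographic_parity_gap(decisions, groups):
    # Group fairness: largest difference in positive-decision rates
    # between any two protected groups.
    rates = {}
    for g in set(groups):
        got = [d for d, grp in zip(decisions, groups) if grp == g]
        rates[g] = sum(got) / len(got)
    return max(rates.values()) - min(rates.values())

def individual_fairness_violations(scores, features, distance, lipschitz=1.0):
    # Individual fairness: flag pairs whose outcomes differ by more
    # than the task-relevant distance between them allows.
    return [
        (i, j)
        for i, j in combinations(range(len(scores)), 2)
        if abs(scores[i] - scores[j]) > lipschitz * distance(features[i], features[j])
    ]

# Group measure: hiring decisions by gender group.
decisions = [1, 1, 0, 1, 0, 0]
groups = ["M", "M", "M", "F", "F", "F"]
print(demographic_parity_gap(decisions, groups))  # 2/3 - 1/3 ~= 0.33

# Individual measure: two near-identical candidates, very different scores.
scores = [0.9, 0.2, 0.5]
features = [(3.0,), (3.1,), (7.0,)]
manhattan = lambda a, b: sum(abs(x - y) for x, y in zip(a, b))
print(individual_fairness_violations(scores, features, manhattan))  # [(0, 1)]
```

Note that a model can satisfy one measure while badly failing the other - the apparent conflict the paper interrogates.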