At the beginning of 2020 I was tired of the 'AI ethics' discourse. But by the end of the year, I'm feeling inspired and awed by the bravery, integrity, skill and wisdom of those who've taken meaningful political action against computationally-mediated exploitation and oppression.
The conversation has moved from chin-stroking, industry-friendly discussion towards meaningful action, including worker organising, regulation, litigation, and building alternative structures. But this move from ethics to praxis inevitably creates fault lines between strategies.
Should we work to hold systems to account 'from the inside'? Legislate and enforce regulation from outside? Resist them from the ground up? Build alternative socio-technical systems aligned with counterpower?
It's easy to dismiss the efforts of others as too naive or too cynical, too radical or not radical enough. Having been peripherally involved with various shades of progressive politics over the years, I've found myself on both sides of such arguments.
Each approach has its shortcomings, and the power to pursue them is not equally distributed; it typically requires navigating hostile institutional structures (in industry, academia, and elsewhere) that favour the already-privileged, and discourage critical work.
But configured in the right way, and embedded in broad-church political movements, these different strategies can be mutually supporting. Tech worker walkouts might stop a particular AI contract, but allying with broader campaigns could lead to a general ban (e.g. on facial recognition technology).
Likewise, even if current regulation isn't strong enough to stop harmful uses of tech outright, if robustly enforced it might put enough barriers in place to give activists a fighting chance to organise against them, or help alternative structures (e.g. platform co-ops) to flourish.
External audits of discriminatory algorithms by researchers and investigative journalists alone might not convince companies to abandon them, but could spark litigation that would force them to.
Legal limits on data collection and AI development will not lead to data justice on their own, but they might substantially limit the potential damage authoritarian fascists, domestic and foreign, can do with such technologies when they gain power.
To borrow loosely from the late EO Wright, we need to combine multiple strategic logics: neutralising harmful technologies in ways that better enable us to transcend the structures they support; transcending them in ways that help neutralise their harms (jacobinmag.com/2015/12/erik-o…).
• • •
Looking forward to reading this (a recommendation from @gileslane), by the late Mike Cooley: engineer, academic, shop steward, and the activist behind the Lucas Plan en.m.wikipedia.org/wiki/Mike_Cool…
NB: this book (from 1980) actually coined the term '*human-centred* systems' as an explicitly socialist and socio-technical political movement centring the needs of the people who make and use technology. A far cry from the kind of human-centred design critiqued by Don Norman (2005).
Some highlights:
Some like to think computers should do the calculating, while people do the creativity and value judgements. But the two can't just be combined "like chemical compounds"; it doesn't scale.
• • •

Thread on possible implications of #SchremsII for end-to-end crypto approaches to protecting personal data. Background: last week the Court of Justice of the EU (CJEU) issued its judgment in Case C-311/18, "Schrems II". Amongst other things, it invalidates Privacy Shield, one of the mechanisms enabling data transfers from the EU to the US. This was in part because US law lacks sufficient limitations on law enforcement access to data, so the protection of data in the US is not 'essentially equivalent' to that in the EU. Similar arguments could apply elsewhere (e.g. the UK).
The main alternative mechanism enabling transfers outside the EEA is the use of 'standard contractual clauses' (SCCs) under Article 46(2)(c) GDPR. But the Court affirmed that SCCs also need to ensure 'essentially equivalent' protection.