At long last, we are thrilled to share this new report, "Assembling Accountability: Algorithmic Impact Assessment for the Public Interest," with everyone. A lot of wonderful collaborative work went into this project, and it will shape the AIGI team's efforts for the next few years.
We started with the question: why do people use the term "AIA" to refer to so many distinct processes with sometimes conflicting purposes? To answer this, we studied impact assessment regimes in other areas, such as environmental, fiscal, privacy and human rights.
We identified 10 "constitutive components" of impact assessment. All mature impact assessment regimes have an answer to these 10 features—but many proposals or existing practices for algo impact assessment do not address the full list.
But that's OK: IA regimes evolve organically over time in response to changing science, community pressure, measurement practices, and judicial outcomes. Evolving consensus is integral to IA practices because ...
"Impacts" are a weird type of evaluative proxy that only emerges from a community that needs a common object to work together and make use of commensurable measurements. You can't just find an "impact." "Impacts" and "accountability" are co-constructed. doi.org/10.1145/344218…
In the ideal case, algorithmic "impacts" would be a close proxy to harms experienced by people subject to those systems. However, that is an enormous challenge for algorithmic systems, where harms can be distributed and non-obvious, and very distant from developers.
Thus, we argue that the major potential failure point of any AIA process/regulation is that the methods used to construct "impacts" could be too far removed from actual, lived harms of people subject to these systems. points.datasociety.net/assembling-acc…
And, as we argue in our supplementary policy brief, policy makers involved in algorithmic regulation need to build in opportunities for public consultation and access as a core priority, otherwise industry capture of methods is inevitable. datasociety.net/library/assemb…
On a personal note, this project has been a collaborative pleasure/haul. In the midst of evacuation from wildfires, I took over the helm as the ever-capable @m_c_elish departed for new digs. I couldn't have a better team than @MannyMoss, @watkins_welcome and Ranjit Singh.
And of course nothing comes out of Data & Society without the contributions of our brilliant editorial, design, comms, policy and engagement teams.
And former AIGI member @aselbst's new paper on how to coordinate AIAs around industry practices is highly complementary to our work, which is unsurprising since he started us down this path. papers.ssrn.com/sol3/papers.cf…
• • •
Well, this looks mighty awful for the future of algorithmic accountability.
Others know a lot more about disparate impact legal matters, so I'll comment on the applied tech ethics side of it. revealnews.org/article/can-al…
2/ When I work with tech corps, I often say that the practical effect of algorithmic governance is the retention of context. Everything about machine learning systems pushes toward stripping context away, and without context you can't have ethical reasoning.
3/ Governance artifacts, like impact reports or product requirements documents, function to retain a narrative about how the product was built, including ethically relevant information: which features were used to build the model, how bias was tested, and so on.
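To make that concrete, here is a minimal sketch (not taken from the report or any particular company's process) of what such a governance artifact could look like in code: a hypothetical GovernanceRecord that logs the features used and one simple bias metric, then writes it out so the context survives beyond the development team. The names (GovernanceRecord, loan_approval_v2, the specific selection rates) are illustrative assumptions only.

```python
import json
from dataclasses import dataclass, field, asdict

@dataclass
class GovernanceRecord:
    """Hypothetical governance artifact: a narrative record of how a model was built."""
    model_name: str
    features_used: list
    bias_tests: dict = field(default_factory=dict)
    notes: str = ""

def disparate_impact_ratio(rate_protected: float, rate_reference: float) -> float:
    """Ratio of selection rates between groups; lower values warrant closer review."""
    return rate_protected / rate_reference

# Illustrative example: record which features went into a (hypothetical) model
# and note any that were flagged during review.
record = GovernanceRecord(
    model_name="loan_approval_v2",
    features_used=["income", "tenure", "zip_code"],
    notes="zip_code flagged as a potential proxy for protected attributes; see review notes.",
)

# Attach one bias-test outcome alongside the features it concerns (made-up rates).
record.bias_tests["disparate_impact"] = disparate_impact_ratio(0.42, 0.61)

# Persist the artifact so the ethically relevant context is retained with the product.
with open("governance_record.json", "w") as fh:
    json.dump(asdict(record), fh, indent=2)
```

The point of a sketch like this isn't the metric itself; it's that the record keeps the context (what went into the model, what was checked, what was flagged) attached to the product as it moves through an organization.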