At long last, we are thrilled to share our new report, "Assembling Accountability: Algorithmic Impact Assessment for the Public Interest," with everyone. A great deal of collaborative work went into this project, and it will shape the AIGI team's efforts for the next few years.
We started with a question: why do people use the term "AIA" to refer to so many distinct processes with sometimes conflicting purposes? To answer it, we studied impact assessment regimes in other domains, such as environmental, fiscal, privacy, and human rights assessment.
We identified 10 "constitutive components" of impact assessment. Every mature impact assessment regime has an answer to all 10 of these features, but many proposals and existing practices for algorithmic impact assessment do not address the full list.
But that's okay: IA regimes evolve organically over time in response to changing science, community pressure, measurement practices, and judicial outcomes. Evolving consensus is integral to IA practices because ...
"Impacts" are a weird type of evaluative proxy that only emerges from a community that needs a common object to work together and make use of commensurable measurements. You can't just find an "impact." "Impacts" and "accountability" are co-constructed. doi.org/10.1145/344218…
In the ideal case, algorithmic "impacts" would be a close proxy for the harms experienced by people subject to those systems. That is an enormous challenge for algorithmic systems, however, where harms can be distributed, non-obvious, and very distant from the developers.
Thus, we argue that the major potential failure point of any AIA process or regulation is that the methods used to construct "impacts" could be too far removed from the actual, lived harms of the people subject to these systems. points.datasociety.net/assembling-acc…
And, as we argue in our supplementary policy brief, policymakers involved in algorithmic regulation need to build in opportunities for public consultation and access as a core priority; otherwise, industry capture of assessment methods is inevitable. datasociety.net/library/assemb…
On a personal note, this project has been both a collaborative pleasure and a long haul. In the midst of evacuating from wildfires, I took over the helm as the ever-capable @m_c_elish departed for new digs. I couldn't ask for a better team than @MannyMoss, @watkins_welcome, and Ranjit Singh.
And of course nothing comes out of Data & Society without the contributions of our brilliant editorial, design, comms, policy and engagement teams.
@latonero was also an early contributor on the AIGI team, and his recent work on human rights impact assessments (HRIAs) was very informative for us. carrcenter.hks.harvard.edu/publications/h…
And former AIGI member @aselbst's new paper on how to coordinate AIAs around industry practices is highly complementary to our work, which is unsurprising given that he started us down this path. papers.ssrn.com/sol3/papers.cf…

