Tech/data/AI ethics researcher and consultant. Co-Founder @ethicalresolve, researcher @datasociety and @pervade_team. Works from #officebarn in the redwoods.
Aug 20, 2021 • 7 tweets • 2 min read
One year ago, I heard the local CalFire command say "you're on your own, no one is coming to help." So, I drove through a Sheriff barricade at 4AM and worked with a friend to save my house, my neighbors' houses, and his son's kitty. I had a shovel and a rake.
I'm still pretty fucked up about it, honestly. This picture spikes my heart rate, I feel nauseated rn. I had it easy compared to people who kept watch for weeks, alone or in small teams, harassed by cops at gunpoint and denigrated by feckless CalFire command in news conferences.
Jun 29, 2021 • 12 tweets • 5 min read
At long last, we are thrilled to share this new report, "Assembling Accountability: Algorithmic Impact Assessment for the Public Interest" with everyone. A lot of wonderful collaborative work went into this project and it will shape the AIGI team's efforts for the next few years.
We started with the question: why do people use the term "AIA" to refer to so many distinct processes with sometimes conflicting purposes? To answer this, we studied impact assessment regimes in other areas, such as environmental, fiscal, privacy and human rights.
Aug 7, 2019 • 12 tweets • 3 min read
Well, this looks mighty awful for the future of algorithmic accountability.
Others know a lot more about disparate impact legal matters, so I'll comment on the applied tech ethics side of it. revealnews.org/article/can-al…

When I work with tech corps, I often point out that the practical effect of algorithmic governance is the retention of context. Everything about machine learning systems pushes toward stripping out context, and without context you can't have ethical reasoning.