The 5 principles in the #AIBillofRights are common sense: systems should work, they shouldn't discriminate, they shouldn't use data indiscriminately, they should be visible and easy to understand, and they shouldn't eliminate human interlocutors. whitehouse.gov/ostp/news-upda…
🧵
1/n
But there's a lot going on "under the hood". Let's unpack them one by one.

Today: Safe and Effective Systems

The principle is clear: Rights-impacting systems should work, and work well. But what does that really mean? Don't the systems we deploy work already? 2/n
Sadly, no. A great paper by @rajiinio, @ziebrah, @Aaron_Horowitz, and @aselbst called "The Fallacy of AI Functionality" unpacks all the ways in which automated systems claim to work but don't. dl.acm.org/doi/10.1145/35…
3/n
@random_walker has written about what he calls "AI Snake Oil" (cs.princeton.edu/~arvindn/talks…) and indeed is now writing a book about it with @sayashk (aisnakeoil.substack.com)
4/n
So automated systems don't always work. And when they fail, there are many consequences, some of which are listed in the "Why this principle is important" section in the Technical Companion to the #AIBillofRights 5/n
So what should we do about it? Because there are many ways in which automated systems can fail to work, or to work well, many different checks are needed to ensure that they are safe when deployed. 6/n
A system that isn't designed with the needs of the people it will impact in mind is unlikely to serve them well: so we need to make sure systems are designed with public consultation and input (just as we take input for other key decisions)
7/n
Systems should work, and continue to work once deployed; they should be regularly and rigorously evaluated for different kinds of risk (e.g., with the @NIST AI RMF (nist.gov/itl/ai-risk-ma…)). And there MUST be the option of taking the system down, or of not deploying it at all. 8/n
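To make "evaluated regularly, with a real takedown option" concrete, here's a minimal sketch of a recurring post-deployment gate. Everything in it is hypothetical: the metric names, the thresholds, and the takedown hook are invented for illustration, not prescribed by the NIST AI RMF.

```python
# A minimal sketch of a recurring post-deployment gate. Metric names,
# thresholds, and the takedown path are hypothetical, not from the NIST AI RMF.
from dataclasses import dataclass

@dataclass
class EvalResult:
    metric: str
    value: float
    minimum: float  # lowest acceptable value for this metric

    def passes(self) -> bool:
        return self.value >= self.minimum

def review_deployment(results: list[EvalResult]) -> bool:
    """Return True only if every scheduled evaluation passed."""
    failures = [r for r in results if not r.passes()]
    for r in failures:
        print(f"FAIL: {r.metric} = {r.value:.2f} (minimum {r.minimum})")
    return not failures

# Run on a schedule, not just once before launch.
results = [
    EvalResult("accuracy_overall", 0.91, 0.90),
    EvalResult("accuracy_worst_group", 0.78, 0.85),  # regressed after deployment
]
if not review_deployment(results):
    print("Pulling the system pending review.")  # the "MUST" above: a real takedown path
```

The design choice that matters: a failed check routes to the takedown path by default, not to a warning log someone may never read.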
Not all harms are foreseeable, granted. But many are. And sometimes it just takes that extra effort to think through potential harms. That effort is worthwhile. 9/n
AI systems in particular feed on data. But is the data the right data? Is it just compounding mistakes from other systems that feed into this one? Have you made sure you're using data that has been established as relevant to the task at hand?
10/n
How sensitive is the domain that you're working with? If it's very sensitive, you should take extra precautions with the data you use and make sure you really understand where it comes from. We don't have the luxury of just grabbing whatever data comes in handy. 11/n
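As a toy illustration of that kind of data discipline, here's a sketch of an intake check that refuses data with unknown provenance or no established relevance. The metadata fields, the approved-source list, and the task name are all assumptions invented for this example.

```python
# An illustrative data-intake check. The metadata fields, the approved-source
# list, and the task name are assumptions invented for this sketch.
REQUIRED_FIELDS = {"source", "collection_date", "collected_for"}
APPROVED_SOURCES = {"agency_intake_2023"}  # sources vetted for this domain

def admit_dataset(metadata: dict, task: str) -> bool:
    """Admit data only if we know where it came from and why it was collected."""
    missing = REQUIRED_FIELDS - metadata.keys()
    if missing:
        print(f"Reject: unknown provenance; missing {sorted(missing)}")
        return False
    if metadata["source"] not in APPROVED_SOURCES:
        print(f"Reject: {metadata['source']!r} has not been vetted for this domain")
        return False
    if metadata["collected_for"] != task:
        # Data collected for another purpose may import that system's mistakes.
        print("Reject: data not established as relevant to the task at hand")
        return False
    return True

# "Whatever data comes in handy" fails the check:
admit_dataset({"source": "scraped_misc"}, task="benefits_eligibility")
```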
All of this needs supervision -- aka governance. There have to be processes and decision-making structures to evaluate the results of testing. I hear all the groans: "you're trapping innovation and slowing it down". If people are being affected, then it's warranted. Speed... kills. 12/n
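One hedged sketch of what "processes and decision-making" could look like in code: a release gate that won't deploy on test results alone, only on sign-off from required reviewers. The roles and the sign-off rule are made up for illustration; they're not a process prescribed by the Blueprint.

```python
# A toy governance gate: the required roles and the sign-off rule are made up
# for illustration; this is not a process prescribed by the Blueprint.
from dataclasses import dataclass, field

REQUIRED_ROLES = {"domain_expert", "legal_review", "affected_community_rep"}

@dataclass
class ReleaseDecision:
    test_report: str                 # link/path to the evaluation results
    signoffs: set = field(default_factory=set)

    def sign(self, role: str) -> None:
        if role in REQUIRED_ROLES:
            self.signoffs.add(role)

    def may_deploy(self) -> bool:
        # Test results alone never ship a system; every required role reviews them.
        return REQUIRED_ROLES <= self.signoffs

decision = ReleaseDecision(test_report="eval-2023-q1")  # hypothetical report id
decision.sign("domain_expert")
print(decision.may_deploy())  # False: review incomplete, so no deployment
```

Yes, the gate slows things down. That's the point.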
We have ways of doing this. For example, @datasociety put out a very nice guide for evaluating impact assessments (datasociety.net/library/assemb…), and the GAO put out a great report last year (which @kdphd helped co-author before going to @WHOSTP): gao.gov/products/gao-2… 13/n
And it needs independent testing and reporting. It's hard to get excited about transparency requirements, but without them there's no incentive to actually do a good job on testing and evaluation, and without independent testing there's no incentive to do it right. 14/n
Hopefully this illustrates the key design elements for ensuring that rights-impacting systems are safe and effective. Next up: Algorithmic Discrimination. 15/n

