Twitter threads about #FAT2020

🏅 And the #FAT2020 Best Paper Awards... 🏅 1/
Best CS paper: Kate Donahue and Jon Kleinberg, Fairness and Utilization in Allocating Resources with Uncertain Demand dl.acm.org/doi/abs/10.114… 2/
Best non-CS track paper: What does it mean to ‘solve’ the problem of discrimination in hiring? Javier Sánchez-Monedero, @LinaDencik and @lilianedwards dl.acm.org/doi/abs/10.114… 3/
Next up, excited for @KLdivergence with an analysis of risk assessment tools and “overbooking”: when someone is arrested on charges more serious than warranted. First off: there’s very little accountability here, so this is going to be a hard problem to address. #FAT2020
Guess what? There is a lot of racial disparity in this. I’ll say... But again, this is hard to address because there’s no real “ground truth”. It’s complicated, and one way to attack the issue is to compare actual convictions against the original accusations.
So let’s look at the pre-trial risk assessment algos. One in particular in the USA is made up of many models that output a score. Green = freedom. Red = you go to jail.
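Here’s a minimal sketch of that banded-score structure. The sub-models, weights, and cut-offs are all invented for illustration (the real tools are proprietary), so treat this as a reading of the talk, not the actual system:

# Toy illustration of a banded pretrial risk score. Sub-model scores,
# weights, and cut-offs are invented; real tools differ in structure.

def combined_score(sub_scores, weights):
    """Aggregate sub-model scores (each 0-1) into one risk score."""
    return sum(s * w for s, w in zip(sub_scores, weights)) / sum(weights)

def decision(score, release_below=0.3, detain_above=0.7):
    """Map the score into the green/red bands from the talk."""
    if score < release_below:
        return "green: release"
    if score > detain_above:
        return "red: detain"
    return "yellow: judicial discretion"

print(decision(combined_score([0.2, 0.5, 0.1], [1, 1, 2])))  # green: release

Every number in there (weights, band edges) is a design decision with no accountable owner, which is exactly the problem the talk raises.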
Next up, something I’m very interested in: bias in hiring algorithms. Can it be made better? With @manish_raghavan at #FAT2020, we’re gonna find out.
Looking at vendors like HireVue and others, specifically all their publicly available information on steps they’ve taken to “de-bias”.
One in particular, a video interview platform, scores people on how they answer questions. This is bad, but it’s not the only bad thing in the space (ugh, lol). Another example grades your online behavior while playing a little game... to predict your job performance???
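Much of the vendors’ public “de-bias” story leans on the EEOC four-fifths rule, which this paper examines. A minimal sketch of that check, with made-up group names and counts:

# Sketch of the EEOC four-fifths (adverse impact) check that hiring
# vendors commonly cite. Group names and counts are made up.

def four_fifths_ok(rates):
    """Pass if each group's selection rate is >= 4/5 of the highest."""
    top = max(rates.values())
    return {group: rate / top >= 0.8 for group, rate in rates.items()}

rates = {
    "group_a": 50 / 100,  # 0.50 selection rate
    "group_b": 30 / 100,  # 0.30 selection rate
}
print(four_fifths_ok(rates))  # group_b fails: 0.30 / 0.50 = 0.6 < 0.8

Passing this check is a very weak guarantee, which is one reason to be skeptical of such claims.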
Talk by Chris Sweeney at #FAT2020 on "Reducing sentiment polarity for demographic attributes in word embeddings using adversarial learning," with @Maryam_Najafian.
There are several types of bias encoded in language models, and this paper focuses on sentiment bias, where certain identity terms encode a more positive sentiment than others. #FAT2020
Various papers have studied the different possible sources of this bias, and this paper focuses on the word vectors themselves. #FAT2020
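A toy sketch of the kind of sentiment probe this line of work builds on: fit a sentiment direction from labeled lexicon words, then score identity terms against it. The vectors below are synthetic stand-ins (not real embeddings), and this shows the measurement step only, not the paper’s adversarial de-biasing:

import numpy as np

# Synthetic embeddings with a planted sentiment component; real work
# would load word2vec/GloVe vectors instead.
rng = np.random.default_rng(0)
dim = 8
sentiment_axis = rng.normal(size=dim)

def toy_vec(polarity):
    return rng.normal(scale=0.1, size=dim) + polarity * sentiment_axis

emb = {
    "wonderful": toy_vec(+1.0), "great": toy_vec(+1.0),
    "terrible": toy_vec(-1.0), "awful": toy_vec(-1.0),
    # Identity terms shouldn't carry sentiment, but here they do:
    "identity_a": toy_vec(+0.4), "identity_b": toy_vec(-0.4),
}

# Least-squares fit of a sentiment direction from the lexicon words.
X = np.stack([emb["wonderful"], emb["great"], emb["terrible"], emb["awful"]])
y = np.array([1.0, 1.0, -1.0, -1.0])
w, *_ = np.linalg.lstsq(X, y, rcond=None)

for term in ("identity_a", "identity_b"):
    print(term, round(float(emb[term] @ w), 2))  # a nonzero gap = bias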
Quote of the day at #FAT2020.
So far my key takeaway from this conference is that anthropologists are absolutely vital to this process. A call to “study up” means contextualizing a process by studying how it sits within larger systems.
Our work (in data) is tangled up in larger social and political struggles. The tendency is to gaze “down”, focusing on the powerless for “social good”. It happens, for example, when we try to study criminal tendencies rather than the bias of the policing systems that criminalize.
OK, now how do we make it more fair? Kit Rodolfa at #FAT2020 on recidivism algorithms.
Too true: anything you optimize for will have a downside. In this case, let’s look at recall optimization. In other words, equal rates of finding those who will actually re-offend, regardless of race (I think? Might have that backwards...).
Surprise! One size fits all doesn’t work. Perhaps different thresholds can improve equity by catering to different groups with different needs?
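A minimal sketch of that per-group thresholding idea on synthetic scores, under my reading of the talk (the paper works with real program data and richer models):

import numpy as np

def threshold_for_recall(scores, labels, target=0.7):
    """Loosest cut-off whose recall on true positives >= target."""
    pos = np.sort(scores[labels == 1])[::-1]  # positive scores, high to low
    k = int(np.ceil(target * len(pos)))       # positives we must capture
    return pos[k - 1]

rng = np.random.default_rng(1)
for group, shift in (("group_a", 0.2), ("group_b", -0.2)):
    labels = rng.integers(0, 2, 500)
    scores = rng.normal(labels + shift, 1.0)  # group-shifted score scale
    t = threshold_for_recall(scores, labels)
    recall = ((scores >= t) & (labels == 1)).mean() / (labels == 1).mean()
    print(group, "threshold", round(t, 2), "recall", round(recall, 2))

The thresholds come out different per group, but the recalls match, which is the equity move described here.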
Exciting to see more discussion of protected classes as counterfactuals (just changing M to F isn’t enough to realistically audit discriminatory processes). #FAT2020
One potential solution, “FlipTest”, maps individuals to their reflected counterparts of a different gender (or another demographic trait) in a comparable distribution of individuals. Interesting.
Auditing: look at the distributions of subgroups, and at those who would get a good outcome in one case and a bad one in the other (and vice versa)... ‼️
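A crude sketch of the FlipTest idea: map each person to a counterpart in the other group’s distribution, then collect the “flipset” of people whose model outcome changes. The paper learns this mapping with a GAN approximating an optimal-transport map; the one-feature quantile matching below, with an invented model, is only a stand-in:

import numpy as np

rng = np.random.default_rng(2)
x_a = rng.normal(0.0, 1.0, 1000)  # a feature for group A
x_b = rng.normal(0.5, 1.0, 1000)  # same feature, group B (shifted)

def counterpart(x, src, dst):
    """Quantile-match x from the source to the target distribution."""
    q = (src < x).mean()
    return np.quantile(dst, q)

def model(x, is_b):
    return x - 0.3 * is_b > 0.4  # toy model penalizing group B

flipset = [x for x in x_a
           if model(x, False) != model(counterpart(x, x_a, x_b), True)]
print(len(flipset), "of", len(x_a), "group-A individuals flip outcome")

Examining who lands in the flipset, and in which direction they flip, is the audit signal the paper proposes.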
The fairness session now at #FAT2020, starting off with a really important question: what do you do if you don’t collect demographic data?
Proxy methods like BISG (Bayesian Improved Surname Geocoding) can be used to infer the category, but that means all auditing is vulnerable to the particularities and problems of the proxy method.
Maybe you have access to an auxiliary dataset like a census file. How effective would that be?
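For reference, a minimal sketch of the BISG-style inference mentioned above: combine a surname-conditional race distribution with neighborhood demographics via Bayes. Every number is invented; real BISG uses Census surname tables and block-level population counts:

# BISG-style proxy: posterior over race given surname and location.
# All probabilities are made up for illustration.

p_race_given_surname = {"a": 0.70, "b": 0.20, "c": 0.10}  # surname table
p_geo_given_race = {"a": 0.02, "b": 0.05, "c": 0.01}      # share of each
                                                          # group in this tract

unnorm = {r: p_race_given_surname[r] * p_geo_given_race[r]
          for r in p_race_given_surname}
total = sum(unnorm.values())
posterior = {r: round(v / total, 3) for r, v in unnorm.items()}
print(posterior)

Any error in those two tables flows straight through to the audit, which is the vulnerability flagged above.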
Explainability up next (!) at #FAT2020, with @JcMalgieri and @MargotKaminski’s paper on the GDPR and impact assessments.
There is no public consensus on the Right to Explanation, but lawyers want automated decision-making to be justifiable on fairness grounds.
Data controllers must assess risks, since the GDPR states that higher-risk cases require stronger safeguards; the paper walks through examples of these duties.
Next up: the brilliant @rajiinio!! #FAT2020
Datasheets and model cards aren’t enough: design decisions must be examined, especially in large, complex systems with multiple algorithms.
Too much to cover today, but READ THE PAPER! Failure modes and effects analysis up next.
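The audit framework adapts failure modes and effects analysis (FMEA) from engineering. A minimal sketch of the classic FMEA scoring, with invented failure modes rather than anything from the paper:

# Classic FMEA: score each failure mode for severity, occurrence, and
# detectability (1-10), then rank by Risk Priority Number (the product).
# These failure modes are invented examples.

failure_modes = [
    ("model underperforms on darker skin tones", 9, 6, 7),
    ("training data drifts from deployment data", 6, 8, 5),
    ("risk score misread by downstream users", 7, 4, 8),
]

for desc, s, o, d in sorted(failure_modes,
                            key=lambda m: m[1] * m[2] * m[3], reverse=True):
    print(f"RPN {s * o * d:4d}  {desc}")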
