Discover and read the best of Twitter Threads about #FAccT21


This is one of my favorite papers at #FAccT21 for sure, and I highly recommend folks watch the talk and read the paper if they can! Tons of nuggets of insight; I was so busy taking notes that I couldn't live-tweet it. Here are some take-aways, though:
The paper looked at racial categories in computer vision, motivated by some of the ways computer vision is applied today.

For instance, face recognition is deployed by law enforcement. One study found that these systems "mistook darker-skinned women for men 31% of the time."
They ask: how do we even classify people by race? If this is done just by geographical region, Zaid Khan argues, it is badly defined, as these regions were drawn by colonial empires and "a long history of shifting imperial borders". 🔥🔥
First paper of session 22 at #FAccT21 is on "Bias in Generative Art" with Ramya Srinivasan. It looks at AI systems that try to generate art in the styles of specific historical artists and, using causal methods, analyzes the biases that exist in the generated art.
They note: it's not just racial bias that emerges, but also bias that stereotypes the artists' styles (e.g., reducing their styles to the use of color), which doesn't reflect their true cognitive abilities. This can hinder cultural preservation and historical understanding.
Their study looks at AI models that generate art mainly in the style of Renaissance artists, with only one non-Western style (Ukiyo-e) included. Why, you might ask?

There are "no established state-of-the-art models that study non-Western art other than Ukiyo-e"!!
Last talk for this #FAccT21 session is "Towards Cross-Lingual Generalization of Translation Gender Bias" with Won Ik Cho, Jiwon Kim, Jaeyoung Yang, Nam Soo Kim.

Remember the Google Translate case study that inserted sexist gender pronouns when translating? This paper is about that.
Languages like Turkish, Korean, and Japanese use gender-neutral pronouns, but translations into languages like English often introduce gender-specific pronouns. On top of that, languages like Spanish and French have gendered *expressions* to keep in mind as well.
This matters because existing translation systems can contain biases that produce translations that are offensive, stereotypical, and not always accurate.

Note that not all languages have colloquially used gender neutral pronouns (like the English "they").
Talk two of paper session 15 at #FAccT21:
"Spoken Corpora Data, Automatic Speech Recognition, and Bias Against African American Language: The Case of Habitual ‘Be’" by Joshua Martin
Studies about racial linguistic bias have begun to be published, but there still aren't many. The talk points to ASR (automatic speech recognition) systems and a paper from last year showing that 5 major systems (Apple, Amazon, Google, ...) had much higher error rates for Black speakers.
This paper looks at the specific case of the habitual "be" that is unique to AAVE/AAL; "Angela be studying" is used as an example, with the point that "Angela is studying" is *not* an adequate or correct equivalent. Speech recognition systems struggle with this.
Starting now is a session at #FAccT21 that I've really been looking forward to today; going to be tweeting a bit!

First talk is called "Differential Tweetment: Mitigating Racial Dialect Bias in Harmful Tweet Detection" by Ari Ball-Burack, Michelle Seng Ah Lee, Jennifer Cobbe, Jatinder Singh
This paper is on detecting harmful tweets & how moderation has been partially delegated to automated systems, with racial biases emerging as a result. They look specifically at "racial dialect bias".
"The systems systematically classify tweets written in African American English Dialect as more harmful at rates higher than those in 'white' English." ... Twitter can then silently hide these tweets, "which literally could silence the voice of the Black community online"
"Tech has the power to take us one step forward and also ten steps back...race is not a risk factor, racism is." - @YESHICAN on how health risk calculators disadvantaged black people and access to healthcare @FAccTConference #facct21
"Any intervention that does not built the political power, agency, and self determination of those most vulnerable, is more likely to harm than help (on use of data for building tech solutions and systems)" - @YESHICAN at @FAccTConference
"Any solution without the direct involvement of those impacted, is no solution at all. A big question to ask is not just what do we need to do, but who do we need to be in this time of data weaponization," - powerful key note by @YESHICAN at @FAccTConference
Me, listening to the first live panel from #FAccT21 and watching all of the paper talks and Q&As:
So many good (shocking?!) learnings from this first live-panel. Will tweet more about today's panels (only lightly though) & keep them within this thread! Already so much good content, I can't keep up with both note-taking and tweeting 😂
Related, I'm so thankful for the live captioning and transcription service that FAccT is using this year! As someone who is hard-of-hearing and neurodivergent, being able to read back on the transcription makes *such* a difference.