On the last-minute name change: "Rather than say the ways that we would like to deviate from the inevitable, we want to talk about the ways in which the implications of the future are up for grabs." - @alixtrot 🔥🔥
.@schock tells us to "put our money where our mouth is" and sign up for and support the Turkopticon organizing effort on behalf of Amazon Mechanical Turk workers:

.@cori_crider talks about Prop 22 here in CA, which companies like Uber spent $200M on in order to encode into law that drivers are not employees. "Having secured that victory, they're seeking to roll out that model in other legislatures." "That is Uber's vision of the future."
.@cori_crider: "The whole transport economy has been Uberized, and we in the UK have a unique opportunity to bring those companies within some measure of the rule of law. Minimum wage and holiday pay are two concrete consequences. That's one future."
On content moderation, @cori_crider mentions "the rules developed overwhelmingly by white middle-class male Americans are enforced by a completely different class of tech workers" (Look at @ubiquity75's "Behind The Screen" for research on exactly this)

yalebooks.yale.edu/book/978030023…
@VidushiMarda: "One of the most invisible structural challenges that we have as researchers on the ground ... trying to make tech less bad, when I think a lot of the work should be not to have some technologies on the ground at all."
@VidushiMarda: "You're testing & perfecting problematic technology in one side of the world & stalling it for the other kind of the world until it is good enough to be imposed on those people. It creates a deconstructive narrative around what regulation & protection looks like."
@schock channels @YESHICAN ("any intervention that doesn't build political power or agency of the marginalized community is liable to harm rather than help") and the US disability justice movement ("nothing about us without us") in introducing the design justice framework. YES
@schock: "How are we going to design systems and processes that are constantly working to dismantle the matrix of domination (a term from Black feminist scholar Patricia Hill Collins for the intersecting systems of racism, patriarchy, capitalism, and ableism)."
Btw, @schock just mentioned the Design Justice Network and their principles for "envisioning a future where design is used to support care, healing, liberation, joy, and sustainability." I love the folks at DJN: Sasha, Una, Wesley, Denise, check them out:

designjustice.org
@alixtrot points to a really good thread about the tension between liberatory thinking and optimization thinking, about how "optimization traps us into certain logics and makes us less imaginative about [making] processes as inclusive as they need to be." Really great insight here.
@cori_crider raises an interesting point: "The risk you run in doing those kinds of procedural challenges--let's say accuracy in facial recognition--is that you legitimize the system and therefore make it easier [for it] to survive." -vs- just saying it's inherently problematic & we don't want it
@schock, pushing back slightly: "Gender Shades is such a crucial piece of ammunition for movement organizers and activists who have managed to successfully pass a range of moratoria and bans at state level and potentially federal level"
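Quick aside from me: if you haven't seen how a Gender Shades-style audit works mechanically, the core move is disaggregating the error rate by intersectional subgroup instead of reporting one overall accuracy number. A minimal sketch (toy data and column names are my own invention, not from the paper):

```python
import pandas as pd

# Toy audit in the spirit of Gender Shades: compute accuracy per
# intersectional subgroup rather than one aggregate number.
# All values and column names here are hypothetical.
df = pd.DataFrame({
    "true_gender": ["F", "F", "M", "M", "F", "M", "F", "M"],
    "pred_gender": ["M", "F", "M", "M", "M", "M", "F", "M"],
    "skin_type":   ["darker", "lighter", "darker", "lighter",
                    "darker", "darker", "lighter", "lighter"],
})
df["correct"] = df["true_gender"] == df["pred_gender"]

# The aggregate number can look acceptable...
print("overall accuracy:", df["correct"].mean())

# ...while the subgroup breakdown surfaces who the system fails.
by_group = df.groupby(["skin_type", "true_gender"])["correct"].mean()
print(by_group)
print("worst-case gap:", by_group.max() - by_group.min())
```

It's exactly this kind of per-subgroup table that became "ammunition" for the moratoria Sasha mentions.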
@schock: "I think it's a mistake to see [optimization and liberation] in binary opposition. How does a particular piece of research move and how does it get used by broader movements, whether those movements are explicitly abolitionist or not?"
@schock, on the value of redefining optimization questions for our 'technical' community: "What are we optimizing for? We could be optimizing for strong consent in AI and machine learning. That has many components that are technically challenging."
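To make "what are we optimizing for?" concrete, here's a toy sketch of my own (not anything from the panel): the same one-number model lands in very different places depending on whether you minimize average error or the worst-off group's error.

```python
import numpy as np

# "What are we optimizing for?" in miniature: fit a single scalar
# prediction for two groups of different sizes. Data is invented.
majority = np.full(90, 1.0)   # 90 people whose true value is 1.0
minority = np.full(10, 5.0)   # 10 people whose true value is 5.0
everyone = np.concatenate([majority, minority])

# Objective 1: minimize average squared error -> the overall mean.
avg_opt = everyone.mean()                                  # 1.4

# Objective 2: minimize the worse group's squared error -> the
# point equidistant from both group means.
worst_group_opt = (majority.mean() + minority.mean()) / 2  # 3.0

for name, pred in [("avg-loss optimum", avg_opt),
                   ("worst-group optimum", worst_group_opt)]:
    errs = [np.mean((g - pred) ** 2) for g in (majority, minority)]
    print(f"{name}: majority err {errs[0]:.2f}, minority err {errs[1]:.2f}")
```

The first objective quietly writes off the minority group (error 12.96 vs 0.16); the second refuses to. Neither is "the" right answer; the point is that the choice of objective is a values question, not a purely technical one.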
@schock mentions there's a whole conversation about what it would mean to translate questions of consent to the technical space, and points at the Consentful Tech Project (which I've actually been thinking *a lot* about in my work). Highly recommend:

consentfultech.io
@cori_crider brings up the public-sector context, and how government "hides contestable policy choices behind a technical veneer", pointing to the UK A-level grading algorithm that became a huge political mess.

theverge.com/2020/8/17/2137…
@cori_crider: "It's on all of us to say, uh-huh, it doesn't have to be this way. To go to Sasha's point, you optimized for what? If you ask people in the A-level, well, we optimized for maintaining the curve instead of individual fairness. People would have said, no, thank you."
@VidushiMarda discusses the exclusion that happens with pushback: by the time academia or civic government is able to push back, it's already late in the process, often after corporations have deployed these technologies. And this happens internationally.
@VidushiMarda: "When it comes to pushback, it's almost like you're given a losing hand. Everything is already decided and done. You can send in comments in 30 days, but you don't know whether they're being read." 👀👀👀
@VidushiMarda: "I think thinking about power more critically, not just in terms of how technologies are used and who they're used on, but how we talk about technologies, where we focus our energy on technologies would be excellent."
@schock: "We need to shift the conversation from fairness to justice and equity. We have to shift the conversation from bias to harm. I encourage everyone to check out the film "Coded Bias" that helps do that."

codedbias.com
