Happening now: the book launch of "Your Computer is on Fire", which is an anthology of essays on technology and inequity, marginalization, and bias.
@tsmullaney with opening remarks on how this *four and a half* year journey has been an incredibly personal one.
I can't believe it's been four years!! I remember attending the early Stanford conferences that led to the completion of this book. At the time I think I was just returning from NYC to Oakland... so much has changed since then, in the world & this field, truly.
@histoftech: "As Sarah Roberts (@ubiquity75 ) shows in her chapter in this book, the fiction that platforms that are our main arbiters of information are also somehow neutral has effectively destroyed the public commons"
@histoftech: "As Safiya Noble (@safiyanoble) has shown in Algorithms of Oppression, trusting an advertising corporation to be a neutral purveyor of information, when their profits depend on manipulating that information, fundamentally misunderstands our capitalist marketplace"
Mar's opening emphasizes how tech companies have become governments of their own "by virtue of their wealth, power, and reach" ... they tell us they can police themselves, but "as a historian, I can tell you that this never works" 🔥🔥🔥
@histoftech: "Even when it loses money, a broken system that consolidates more power will not be discarded ... because oppression is about power, as much as it is about profit."
@tsmullaney noted in his intro that one of the talks that made him think, "we need to begin [this work] again," was that of @Halcyon_L on accent bias in technological systems. I've been thinking about her work for a few years now, and I'm beyond excited to read her chapter!!
@Halcyon_L: "If you possess a foreign accent or speak in a dialect, speech technologies practice a form of othering that is biased and demands a form of post-colonial assimilation to standard accents that silences the speaker's sociohistorical reality" 🔥
@Halcyon_L describes spending time with a friend in Trinidad, an iPhone user, who first spoke to Siri in her Trinidadian accent, but Siri could not understand her. Once she switched to an American accent, Siri did. A form of assimilation & alienation in using this emerging tech.
Sreela Sarkar has a chapter drawing on ethnographic work on skills training programs in India & how "the promise of flattening of inequalities will not set you free".
SS: "Like the majority of skills training programs directed at marginalized youth in contemporary India, this program produces precarious and low-paid workers at the fringes of the information economy" still trapped within "capitalist boundaries of productivity"
Ben Peters is now reading from two chapters on behalf of authors who couldn't make the call: the first from @safiyanoble's chapter, "Your Robot Isn't Neutral," and the second from @JanetAbbateVT's "Coding is Not Empowerment."
Safiya via Ben: "We have very little regulation about human-robotics interaction, as our legal and political regimes are woefully out of touch with the ways in which social relations will be transformed with the introduction of robotics"
Safiya's chapter notes that even at leading conferences on the future of robotics, concerns for well-developed public policy or oversight have largely gone unaddressed. "We have to ask what is lost, who is harmed."
Janet via Ben: "Superficial claims that learning to code will automatically be empowering can in fact mask a lack of commitment to structural change", instead pushing folks into a biased pipeline that "reinforces the narrow conceptions of merit and skill"
Janet's chapter insists that companies can do more right now to increase diversity, and that we need to put more pressure on these companies to hire and promote women and minorities into senior positions. "Real solutions may be quite uncomfortable changes"
@JanetAbbateVT: "Unless there is a systematic realignment of opportunities, rewards, and values in the tech industry, training individual women and minorities does little to shift the culture. Coding by itself is not empowerment"
@bjpeters reads from his afterword and mentions that the internet is turning 50 & it's a fitting moment to stage an intervention. Says this book is a call for a collective language that we should act on with seriousness and urgency.
@bjpeters: "The challenge of anyone who lives in our broken world is not to delay to some future date the fact that the needs of the many outweigh the privileges of the few here and now" ...understand that "your computer is on fire, or the world as we know it, will burn up first"
fjsmfids I truly can't tweet as quickly as the presenters are going 😂😭 just sitting with so many nuggets of **knowledge** and **wisdom** being dropped at once, these are all readings from the book itself!! If you like what you hear, get the book!!
@ubiquity75: "There's also an underlying presupposition almost always at play that suggests tacitly & otherwise, that the dehumanized & anonymous decision making done by computers in a way that mimics but replaces that of human actors is somehow intrinsically more just, or fair."
@ubiquity75: "If we can all agree that humans are fallible, then why is artificial intelligence based on human input values and judgments then applied at scale and with little to no means of accountability?"
@ubiquity75 points out that the deployment of these artificial intelligence systems boils down to questions of labor and cost saving for these corporations: controlling labor organizing and reducing the impact of collective action by shrinking the number of employees
@nensmenger points out that what generates the most profit for Amazon is its cloud services, which we don't often hear about, and which are often rendered invisible because the equipment and labor are located elsewhere.
@nensmenger: "The cloud is not only a metaphor and a marketing tool, but also a technology for obscuring the true costs, materiality, and labor required to enable the digital economy." Notes a data center needs 350-500 MW of power and 400K gallons of water *daily* to operate!!
@nensmenger: "the metaphor of the cloud allows the computer industry to conceal and/or externalize a whole host of problems, from energy costs to the abuse of workers to e-waste and pollution" (literally setting the earth on fire 🔥🔥🔥)
Kavita Philip: "This is not just a local fire as these chapters demonstrate. It's raging around the world. It has been building for a long time, and it is consuming us." (1/2)
Kavita Philip: "This is not just a technology fire, it is a resource fire that threatens the planet, it is a psychological fire that implicates our desires and fears and it is a cultural fire that enlists our identities and our communities." (2/2) 🔥🔥🔥
Kavita Philip: "Technology did not invent oppression; historical injustices are being reshaped, revivified, and redefined by technology." ...moves us to look at the history of social justice activism, at community movements for unionizing, at movements for justice & accountability
So many great takes in the post-discussion. On localization, the QWERTY keyboard, speech technologies as neocolonialism. On how the government can lean on private corporations to create a surveillance state that otherwise would have been illegal and unconstitutional.
Great point from @safiyanoble on how the AI arms race that the US faces is different from the one that China faces, as there are very different paradigms, and "the flattening of conversation is part of the danger" when we don't recognize the different ends and means
A lot of really common threads: on private companies as essentially a branch of the government, on the importance of labor in conversations about tech creation, on the global domination of tech companies' power, re: roles in oppression, surveillance, detention
Paul Edwards: "Facebook and Google have more 'citizens' than some countries in the world ... no national democracy has any hope of competing" esp with the *speed* at which these companies grow (and can be replaced)
@tsmullaney: "We made it a point to stop being polite about our expertise" 🔥 calls back to @bjpeters' earlier point about how this book is "no-nonsense", that it pulls no punches about what's at stake
"The time for equivocation is over"
@histoftech talks about writing their intro with their students in mind, how they used to be the radical one in the room, but now it's the students. Their advice? Organize. Points to the labor organizing across class lines that they're really gratified to see today.
@histoftech cites the need for a pincer move: labor organizing from the bottom, and policy and regulation from the top. Also notes that there are different ways to resist that we can do together, such as even just slowing down work to prevent fatalities.
This was an incredibly invigorating book launch / panel / discussion, and I'm so thankful to have been able to attend it!
A reminder for folks that "Your Computer Is On Fire" is now available wherever books are sold, so be sure to grab a copy today!!
Excited for this final keynote! For those not in the know, Julia Angwin was the journalist who broke the "Machine Bias" story with ProPublica that just about everyone in this field now cites. She also founded The Markup & is the EIC there. Her work has been field-changing.
@JuliaAngwin is talking about how The Markup does things differently, emphasizing building trust with readers: by writing stories and showing their analysis work, but also through a privacy promise of not tracking *anything* about people who visit their website. No cookies!
@JuliaAngwin: "We don't participate in the game that is pretty common in Silicon Valley .... we don't think someone who gets paid to be a spokesperson for an organization deserves the cloak of anonymity. That's what we do differently from other journalists they might talk to."
On the last-minute changing of the name: "Rather than say the ways that we would like to deviate from the inevitable, we want to talk about the ways in which the implications of the future are up for grabs." - @alixtrot 🔥🔥
.@schock tells us to "put our money where our mouth is" and sign up for and support the Turkopticon organizing effort to help support Amazon Mechanical Turk workers:
.@cori_crider talks about Prop 22 here in CA, which companies like Uber spent $200M on in order to encode into law that drivers are not employees. "Having secured that victory, they're seeking to roll out that model in other legislatures." "That is Uber's vision of the future."
Let's goooo!!! The second of two papers on AI education is coming up in a bit. As an AI educator focused on inclusion and co-generative pedagogy, I'm *really* excited for this talk on exclusionary pedagogy. Will tweet some take-aways in this thread:
First, a mention for those who don't know, I've been a CS educator since 2013, and in 2017 I moved into specifically being an AI educator, focusing on inclusive, accessible, and culturally responsive high school curriculum, pedagogy, and classroom experiences. Informs my POV
.@rajiinio starts the talk off by mentioning that there's an AI ethics crisis happening & we're seeing more coverage of the harms of AI deployments in the news. This paper asks the question, "Is CS education the answer to the AI ethics crisis, or actually part of the problem?" 🤔
This is one of my favorite papers at #FAccT21 for sure, and I highly recommend folks watch the talk and read the paper if they can! Tons of nuggets of insight, was so busy taking notes that I couldn't live-tweet it. Here are some take-aways, though:
The paper looked at racial categories in computer vision, motivated by looking at some of the applications of computer vision today.
For instance, face recognition is deployed by law enforcement. One study found that these "mistook darker-skinned women for men 31% of the time."
They ask: how do we even classify people by race? If it's done just by geographical region, Zaid Khan argues, that's badly defined, as these regions are drawn by colonial empires and "a long history of shifting imperial borders". 🔥🔥
First paper of session 22 at #FAccT21 is on "Bias in Generative Art" with Ramya Srinivasan. Looks at AI systems that try to generate art in specific historical artists' styles, and uses causal methods to analyze the biases that exist in the art generation.
They note: it's not just racial bias that emerges, but also bias that stereotypes the artists' styles (e.g., reducing a style to its use of color), which doesn't reflect their true cognitive abilities. This can hinder cultural preservation and historical understanding.
Their study looks at AI models that generate art mainly in the style of Renaissance artists, with only one non-Western style (Ukiyo-e) included. Why, you might ask?
There are "no established state-of-the-art models that study non-Western art other than Ukiyo-e"!!
Last talk for this #FAccT21 session is "Towards Cross-Lingual Generalization of Translation Gender Bias" with Won Ik Cho, Jiwon Kim, Jaeyoung Yang, Nam Soo Kim.
Remember the Google Translate case study where translations introduced sexist gender pronouns? This is about that.
Languages like Turkish, Korean, Japanese, etc. use gender-neutral pronouns, but translations into languages like English often introduce gender-specific pronouns. And languages like Spanish and French have gendered *expressions* to keep in mind as well.
This matters because existing translation systems can contain biases that generate translated results that are offensive, stereotypical, and not always accurate.
Note that not all languages have colloquially used gender neutral pronouns (like the English "they").