Let's goooo!!! The second of two papers on AI education is coming up in a bit. As an AI educator focused on inclusion and co-generative pedagogy, I'm *really* excited for this talk on exclusionary pedagogy. Will tweet some take-aways in this thread:
First, a mention for those who don't know, I've been a CS educator since 2013, and in 2017 I moved into specifically being an AI educator, focusing on inclusive, accessible, and culturally responsive high school curriculum, pedagogy, and classroom experiences. Informs my POV
.@rajiinio starts the talk off by mentioning that there's an AI ethics crisis happening & we're seeing more coverage of the harms of AI deployments in the news. This paper asks the question, "Is CS education the answer to the AI ethics crisis, or actually part of the problem?" 🤔
They pointed to 3 areas in existing CS pedagogy that led to today's ethics crisis. Emphasizes that a lot of CS pedagogy today is incredibly individualized, aimed at equipping the sole computer scientist merely with tools to "fix" a technical problem as an "individual savior".
They mention a need for more epistemological pluralism, which is lacking in existing education that values only programmatic intelligence. This leads to the trap of "techno-solutionism" (the idea that fixes only need to be technical in nature), which is currently heightened.
For their research, they looked at 254 ethics courses from 132 universities, mostly from CS, but some from HSS (humanities and social sciences). Found "mechanisms of exclusion": a tendency for CS education to push aside HSS and try to solve the problem on its own.
Through their survey of courses, they found that participants of the CS discipline tried to isolate themselves from other disciplines that could actually help, because they didn't value other schools of thought as valid ways of knowing or lacked interest in learning from each other.
@rajiinio, on "mechanisms of exclusion": "Disciplines don't *value* each other's way of knowing ... disciplines don't talk to each other, and there's a lack of translation to understand and collaborate with the other group."
On methodological dogmatism: Only one course even looked at how different disciplines had different strategies or methodologies for arriving at a conclusion or "truth". CS courses might only focus on HSS approaches' weaknesses (not strengths), and not their own weaknesses.
On lack of joint outputs: only 5 courses even allowed for cross-disciplinary teaching. They point to how many of these courses were ABET-required or had prohibitive prerequisites, which led to them being closed courses available only to students of those disciplines.
On siloed citations: CS teachers would often only assign CS authors, HSS teachers would often only assign HSS authors. "The same case study might be used in two courses, with completely different conclusions." (!!!)
@rajiinio: "AI ethics is inherently interdisciplinary" ... "Contrary to popular belief, CS can't solve the ethics crisis on its own. Exclusionary behavior results in a loss of values, assumptions, and methods that make the field unable to address its problems."
@rajiinio: "The expansion of the AI field is required to solve the AI ethics problem, but this exclusionary behavior narrows the lens of the field in a way that actually prohibits it from being able to think critically about how to address its problems."
They look to a shift in the climate change literature as inspiration: it originally looked only at greenhouse gases, but expanded, driven by students, to consider the social impacts and challenges of tackling climate change.
One possible approach to do this shift: demonstrate the positive outcomes of collaborative pedagogy. Allow students to interact with those from other departments, work on projects together, encourage cross-disciplinary collaborations.
Another approach: "educate students on frameworks of interventions based around existing problems, not anchored on the skills of those assumed to be in the position to address the problems" (aka present different skillsets)
Third approach: Discuss the various stakeholders when exploring the curriculum for AI ethics--make it clear that different stakeholders exist, beyond technologists. (This is reminiscent of the use of Persona Cards from an earlier talk, to get students to think about those POVs.)
Fourth approach: Work directly with the populations affected.
Fifth approach: Assess their own disciplinary limitations & where their disciplines can't solve the problems at hand, to encourage bringing in methods and approaches from other disciplines.
Q for Deb on who's teaching this work (CS ethics). Deb mentions a paper from @cfiesler specifically on looking at that. @rajiinio mentions self-critical approaches that should be used by CS professors to critique their own discipline, even if the course is taught from 1 POV.
@rajiinio mentions again the climate science field, where much of the early work looked at the "technical" side of greenhouse gases, but the field soon realized the economic and social aspects needed to be considered as well. We can take inspiration from that when teaching CS ethics.
A through-line in this work is that we have to emphasize the shortcomings of our discipline, whether it's in being self-critical when teaching, or acknowledging that our sole discipline's methods may not be able to solve the problem at hand. That's how we motivate collaboration.
Related, I wanted to plug in a thread here that Deb & many others have been commenting on, about how even within our community there's a need for a FAccT dictionary and cross-discipline translation guide. Thinking about that re: language for collaboration
Also, re: incorporating the perspectives of different stakeholders into the classroom, this is something the first AI education paper did with "Persona Cards". See this thread for a lil more on that approach (and check out their paper!):
Q for @rajiinio on advice to educators on teaching AI case studies that have harms for underrepresented students that could be traumatic. She points to the work of @DrDesmondPatton and the SAFE Lab at Columbia as an example of folks who have done this teaching well.
I've got *lots* of thoughts on this, especially with creating classroom environments that foster safety for the students, acknowledge the students as full people, & create space to have these sorts of dialogues inclusively. Rooted in teaching w/ justice & empathy in mind.
Excited for this final keynote! For those not in the know, Julia Angwin was the journalist who broke the "Machine Bias" article with ProPublica that just about everyone in this field now cites. She also founded The Markup & is the EIC there. Her work has been field-changing.
@JuliaAngwin is talking about how The Markup does things differently, emphasizing building trust with the readers. By writing stories and showing their analysis work, but also through a privacy promise, not tracking *anything* about people who visit their website. No cookies!
@JuliaAngwin: "We don't participate in the game that is pretty common in Silicon Valley .... we don't think someone who gets paid to be a spokesperson for an organization deserves the cloak of anonymity. That's what we do differently from other journalists they might talk to."
On the last-minute changing of the name: "Rather than say the ways that we would like to deviate from the inevitable, we want to talk about the ways in which the implications of the future are up for grabs." - @alixtrot 🔥🔥
.@schock tells us to "put our money where our mouth is" and sign up for and support the Turkopticon organizing effort to help support Amazon Mechanical Turk workers:
.@cori_crider talks about Prop 22 here in CA, which companies like Uber spent $200M on in order to encode into law that drivers are not employees. "Having secured that victory, they're seeking to roll out that model in other legislatures." "That is Uber's vision of the future."
This is one of my favorite papers at #FAccT21 for sure, and I highly recommend folks watch the talk and read the paper if they can! Tons of nuggets of insight, was so busy taking notes that I couldn't live-tweet it. Here are some take-aways, though:
The paper looked at racial categories in computer vision, motivated by some of the applications of computer vision today.
For instance, face recognition is deployed by law enforcement. One study found that these "mistook darker-skinned women for men 31% of the time."
They ask, how do we even classify people by race? If this is done just by looking at geographical region, Zaid Khan argues this is badly defined, as these regions are defined by colonial empires and "a long history of shifting imperial borders". 🔥🔥
First paper of session 22 at #FAccT21 is on "Bias in Generative Art" with Ramya Srinivasan. Looks at AI systems that try to generate art in the styles of specific historical artists and, using causal methods, analyzes the biases that exist in the generated art.
They note: It's not just racial bias that emerges, but also bias that stereotypes the artists' styles (e.g., reduction of their styles to use of color) which doesn't reflect their true cognitive abilities. Can hinder cultural preservation and historical understanding.
Their study looks at AI models that generate art mainly in the style of Renaissance artists, with only one non-Western style (Ukiyo-e) included. Why, you might ask?
There are "no established state-of-the-art models that study non-Western art other than Ukiyo-e"!!
Happening now: the book launch of "Your Computer is on Fire", which is an anthology of essays on technology and inequity, marginalization, and bias.
@tsmullaney with opening remarks on how this *four and a half* year journey has been an incredibly personal one.
I can't believe it's been four years!! I remember attending the early Stanford conferences that led to the completion of this book. At the time I think I was just returning from NYC to Oakland... so much has changed since then, in the world & this field, truly.
@histoftech: "As Sarah Roberts (@ubiquity75 ) shows in her chapter in this book, the fiction that platforms that are our main arbiters of information are also somehow neutral has effectively destroyed the public commons"
Last talk for this #FAccT21 session is "Towards Cross-Lingual Generalization of Translation Gender Bias" with Won Ik Cho, Jiwon Kim, Jaeyoung Yang, Nam Soo Kim.
Remember the Google Translate case study that added sexist gender pronouns when translating? This is about that.
Languages like Turkish, Korean, and Japanese use gender-neutral pronouns, but translations into languages like English often insert gender-specific pronouns. Languages like Spanish and French also have gendered *expressions* to keep in mind.
This matters because existing translation systems can contain biases that generate translated results that are offensive, stereotypical, and not always accurate.
Note that not all languages have colloquially used gender neutral pronouns (like the English "they").
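(Side note for the technically curious: here's a minimal sketch of how you could probe this gender-default behavior yourself with an open-source translation model. The specific checkpoint, Helsinki-NLP/opus-mt-tr-en, and the example sentences are my own illustrative assumptions, not from the paper.)

```python
# Minimal probe for gender-default bias in Turkish -> English translation.
# Assumes the `transformers` library; the checkpoint below is an illustrative
# choice, not the model studied in the paper.
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-tr-en")

# Turkish "o" is a gender-neutral third-person pronoun.
sentences = [
    "O bir doktor.",    # gloss: "They are a doctor."
    "O bir hemşire.",   # gloss: "They are a nurse."
    "O bir mühendis.",  # gloss: "They are an engineer."
]

for s in sentences:
    out = translator(s)[0]["translation_text"]
    # If the output defaults to "He is a doctor" / "She is a nurse",
    # the system has injected a gender stereotype absent from the source.
    print(f"{s} -> {out}")
```

If the outputs split along "he is a doctor" / "she is a nurse" lines, that's exactly the kind of injected stereotype this line of work is concerned with.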