Me, listening to the first live-panel from #FAccT21 and as I watch all of the paper talks and Q&A's:
So many good (shocking?!) learnings from this first live-panel. Will tweet more about today's panels (only lightly though) & keep them within this thread! Already so much good content, I can't keep up with both note-taking and tweeting 😂
Related, I'm so thankful for the live captioning and transcription service that FAccT is using this year! As someone who is hard-of-hearing and neurodivergent, being able to read back on the transcription makes *such* a difference.
Dr. Seda Gürses: "There's no COVID story without technology ... COVID was used as an excuse to increase the acts and spaces that are policed using digital technology ... Our computational infrastructures exposed the social order they had been envisioning"
SG: Government entities wanted COVID apps, many of which left privacy out of the picture. Not sure if they understood this meant negotiating their public health system with two companies -- Google and Apple. It has to align with the bottom line of these companies.
Dr. Marzyeh Ghassemi: "Students put together this self-reporting website. It got a lot of traction, so a lot of people used it. We were approached by the Canadian Armed Forces, who asked us for access to the detailed geocoded and time-stamped data."
MG: One of the advisors on the project said they should give up the data they collected to the military. MG pushed back, saying no: "what enforcement do you think is going to happen?" *Who* are the people who do need to break curfew? Visible minorities understand the repercussions.
MG notes her experience from pandemic, that "there's often an assumption from people who have never had a negative interaction with a government that these things are intended to help you. ... I'm not sure they're intended to help me, so I think changing that framing is helpful."
Deb Raji: "Something we saw during COVID was this pivot to increase sort of digitization of different elements of surveillance by virtue of making things frictionless, so digital doormen. That enabled an increased weaponization ... to exclude or track people and criminalize them"
SG relates COVID's false promises of "temporary" infrastructures to 9/11's surveillance infrastructures, and how these are opportunities to do population management, like how 9/11 infrastructures were used to massively collect data and surveil folks (as Snowden revealed).
MG mentions tech that can be co-opted for other uses; mentions work from MIT looking at how Wi-Fi signals bounce off people in unique ways based on their biometrics, which can be used to track people. Creepy because we all have routers everywhere, in our homes, can't turn them off...
DR mentions the huge influence corps have on policy. "Amazon put a pause on sale of facial recognition to police departments but doubled down on lobbying to Congress. Some folks might underestimate how influential these companies are in terms of shaping different types of policy"
DR mentions the slippery slope of tech promises. Cites Boston Dynamics robot, which they said would never be used to police citizens. But now we see them deployed in the Bronx for just that. DR mentions this is why regulation independent from companies is going to be essential.
DR, on surveillance tech: "We saw in realtime as COVID became a sort of marketing tool to increase their expansion, but also make all these promises. 'Oh, given COVID, we have this positive use case of why you need facial recognition in your building now.'"
Oh no, I'm tweeting too much, which I promised I wouldn't 😂 At some point I guess I switched over to tweeting to take notes. Anyways, this panel had some great discussions on agency, on surveillance, on regulation (or lack thereof), on reliance on Big Tech.
Excited for this final keynote! For those not in the know, Julia Angwin is the journalist who broke the "Machine Bias" story with ProPublica that just about everyone in this field now cites. She also founded The Markup & is the EIC there. Her work has been field-changing.
@JuliaAngwin is talking about how The Markup does things differently, emphasizing building trust with the readers. By writing stories and showing their analysis work, but also through a privacy promise, not tracking *anything* about people who visit their website. No cookies!
@JuliaAngwin: "We don't participate in the game that is pretty common in Silicon Valley .... we don't think someone who gets paid to be a spokesperson for an organization deserves the cloak of anonymity. That's what we do differently from other journalists they might talk to."
On the last-minute changing of the name: "Rather than say the ways that we would like to deviate from the inevitable, we want to talk about the ways in which the implications of the future are up for grabs." - @alixtrot 🔥🔥
.@schock tells us to "put our money where our mouth is" and sign up for and support the Turkopticon organizing effort to help support Amazon Mechanical Turk workers:
.@cori_crider talks about Prop 22 here in CA, which companies like Uber spent $200M on in order to encode into law that drivers are not employees. "Having secured that victory, they're seeking to roll out that model in other legislatures." "That is Uber's vision of the future."
Let's goooo!!! The second of two papers on AI education is coming up in a bit. As an AI educator focused on inclusion and co-generative pedagogy, I'm *really* excited for this talk on exclusionary pedagogy. Will tweet some take-aways in this thread:
First, a mention for those who don't know, I've been a CS educator since 2013, and in 2017 I moved into specifically being an AI educator, focusing on inclusive, accessible, and culturally responsive high school curriculum, pedagogy, and classroom experiences. Informs my POV
.@rajiinio starts the talk off by mentioning that there's an AI ethics crisis happening & we're seeing more coverage of the harms of AI deployments in the news. This paper asks the question, "Is CS education the answer to the AI ethics crisis, or actually part of the problem?" 🤔
This is one of my favorite papers at #FAccT21 for sure, and I highly recommend folks watch the talk and read the paper if they can! Tons of nuggets of insight, was so busy taking notes that I couldn't live-tweet it. Here are some take-aways, though:
The paper looked at racial categories in computer vision, motivated by looking at some of the applications of computer vision today.
For instance, face recognition is deployed by law enforcement. One study found that these "mistook darker-skinned women for men 31% of the time."
They ask, how do we even classify people by race? If this is done just by looking at geographical region, Zaid Khan argues this is badly defined, as these regions are defined by colonial empires and "a long history of shifting imperial borders". 🔥🔥
First paper of session 22 at #FAccT21 is on "Bias in Generative Art" by Ramya Srinivasan. Looks at AI systems that try to generate art based on specific historical artists' styles, and uses causal methods to analyze the biases that exist in the art generation.
They note: It's not just racial bias that emerges, but also bias that stereotypes the artists' styles (e.g., reduction of their styles to use of color) which doesn't reflect their true cognitive abilities. Can hinder cultural preservation and historical understanding.
Their study looks at AI models that generate art mainly in the style of Renaissance artists, with only one non-Western style (Ukiyo-e) included. Why, you might ask?
There are "no established state-of-the-art models that study non-Western art other than Ukiyo-e"!!
Happening now: the book launch of "Your Computer is on Fire", which is an anthology of essays on technology and inequity, marginalization, and bias.
@tsmullaney with opening remarks on how this *four and a half* year journey has been an incredibly personal one.
I can't believe it's been four years!! I remember attending the early Stanford conferences that led to the completion of this book. At the time I think I was just returning from NYC to Oakland... so much has changed since then, in the world & this field, truly.
@histoftech: "As Sarah Roberts (@ubiquity75 ) shows in her chapter in this book, the fiction that platforms that are our main arbiters of information are also somehow neutral has effectively destroyed the public commons"