Michelle Lam
CS PhD student @Stanford | hci, social computing, human-centered AI, algorithmic fairness (+ dance, design, doodling!) | she/her
Oct 24, 2022 · 6 tweets · 4 min read
Today, technical experts hold the tools to conduct system-scale algorithm audits, so they largely decide what algorithmic harms are surfaced. Our #cscw2022 paper asks: how could *everyday users* explore where a system disagrees with their perspectives? hci.st/end-user-audit 🧵

[Image: End-User Audits: system-scale algorithm audits led by individuals]

(2/6) User-led audits at this scale are challenging: just to get started, they require substantial user effort to label and make sense of thousands of system outputs. Could users label just 20 examples and jump straight to the valuable part: providing their unique perspectives?
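To make that idea concrete, here is a minimal sketch of one way a handful of labels could be extrapolated to surface disagreements. This is an illustrative stand-in, not the paper's actual method: the embedding setup, the logistic-regression choice, and every variable name below are assumptions.

```python
# Illustrative sketch (not the paper's implementation): generalize a
# user's ~20 labels to the full set of system outputs, then rank items
# by how strongly the predicted user view departs from the system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Assumption: each system output (e.g., a comment) has a feature
# embedding and a binary system decision (e.g., flagged or not).
n_items, dim = 5000, 64
embeddings = rng.normal(size=(n_items, dim))   # stand-in for real text embeddings
system_flagged = rng.random(n_items) < 0.3     # stand-in for the system's decisions

# The user labels only ~20 sampled items with their own judgment.
labeled_idx = rng.choice(n_items, size=20, replace=False)
user_labels = rng.integers(0, 2, size=20)      # stand-in for real user labels

# Fit a lightweight personal model that predicts how *this user*
# would label every remaining item.
personal_model = LogisticRegression(max_iter=1000)
personal_model.fit(embeddings[labeled_idx], user_labels)
predicted_user_view = personal_model.predict_proba(embeddings)[:, 1]

# Disagreement score: distance between the predicted user view and the
# system's decision. High-scoring items are the audit-worthy ones.
disagreement = np.abs(predicted_user_view - system_flagged.astype(float))
top_disagreements = np.argsort(-disagreement)[:10]
print("Items to review first:", top_disagreements)
```

The design point: instead of labeling thousands of outputs, the user labels a small sample, a model generalizes their judgments, and audit attention goes straight to the highest-disagreement items.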