Today, technical experts hold the tools to conduct system-scale algorithm audits, so they largely decide what algorithmic harms are surfaced. Our #cscw2022 paper asks: how could *everyday users* explore where a system disagrees with their perspectives? hci.st/end-user-audit 🧵
(2/6) User-led audits at this scale are challenging: just getting started requires substantial user effort to label and make sense of thousands of system outputs. What if users could label only 20 examples and jump straight to the valuable part: contributing their unique perspectives?