Frontiers in Human Robot Interaction Research, as imagined by me, a disabled researcher who's tired of your shit. 🧵
1) A socially assistive robot for autistic people, but instead of "correcting" autistic "social deficits," it absolutely roasts other people about their own ableism.
2) A reconfigured Amazon Alexa that mounts to a power chair and yells at people when they do something rude. "Do not touch!", "Talk to me, not my PCA.", "Bark Bark!", "*cicada noises*"
3) I have had this idea for an #altCHI piece where I bring a robot with me and it "corrects" my autistic traits while I present. I had this whole delusion that people would see how dystopic it was, but then I realized they would think it was "working" so... that's horrifying.
4) a "classroom behavior management" intervention delivered on the audience of a #HCI conference. They get to see their nametags go to the "red zone" for tapping their legs or clicking their pens or scrolling their phones.
5) A robot that follows doctors around, and any time they complain about anything in the privacy of their own homes, it just goes "have you tried getting more exercise?"
6) A robotic goose that chases people, honking, whenever they tell inspirational stories about that time they offered unsolicited assistance to a disabled person.
7) The goose again, but it drops out of the sky screeching anytime you ask an amputee what happened to them.
8) Just the goose. All disabled people want is a robotic goose assistant that honks at assholes.
What did happen was that repeatedly throughout my graduate studies, every time I wrote research protocols that engaged directly with autistic people (adults and children), I had to justify that they were competent to consent.
That's right. I had to prove that autistic people are capable of understanding research and of having the agency to consent to participation. There's no two ways about it. This is institutionalized, systemic ableism.
Our paper "Oh No, Not Another Trolley!" has been accepted to @IEEESSIT. We survey CS majors about their exposure to ethics in CS courses and their ethical reasoning on 5 scenarios drawn from real-world examples of algorithmic decision-making support in healthcare & #COVID19. 🧵
While many students were able to articulate potential threats to equity and mortality for people marginalized by racial, gender, and class oppression, none concurrently recognized disabled or chronically ill people as a specific class vulnerable to systemic bias.
(Additionally, students who recognized racial discrimination in algorithmic decision-making did not recognize how ableism strengthens racism.)
Hello everyone! Now would be a really... apropos time to look up Aktion T4 and learn about this programme of the Nazi regime.
Why?
Because it's happening again.
Thread
Even before T4 was formalized, the Nazi regime was testing public tolerance for mass murder by playing shell games with disabled people.
Disabled children, elders, and "hysteric" relatives were sent to congregate care settings. Local doctors encouraged families: "it's for the best."
They gave all sorts of reasons: that hospitals for the disabled could provide "better treatment" (even lying that patients could recover and come home "healthy"), that a productive German family should not be burdened with such care, that it was a civic duty.
Also excited to learn that the highlights I wrote while slightly irritated and extremely tired are the real actual words people see when they look up the article lmao
1) ~90% of the technological interventions collected in our review constitute "normalizing technologies" that view autistic traits as deficits to overwrite with neurotypical behavior.
When we work with marginalized groups as co-informants, co-designers, and co-researchers, we have to be aware of internalized oppression in ourselves and our participants.
Internalized oppression can result in our participants expressing desires that align with their systemic oppression.
This doesn't mean that those desires are invalid! BUT