Deb Raji
9 Nov, 4 tweets, 2 min read
It seriously annoys me how much tech cos and those representing them are valued in tech policy conversations.

It speaks to the prevalence of the false "prodigal tech bro" narrative that only those who participated in creating the harm, once reformed, will be able to stop it.
It also undermines the value of those with non-tech-co experience. I want to see those who have been fighting to protect people for years be the top candidates considered.

Regulation is supposed to be our way of forcing companies to work against their own interests for the sake of justice. Instead, we keep getting policy that works to protect the tech and the profits it brings in, rather than prioritizing the well-being of people. This is why! 😒
For those unfamiliar with this:

"Prodigal tech bro stories skip straight from the past, when they were part of something that—surprise!—turned out to be bad, to the present, where they are now a moral authority on how to do good..." - @mariafarrell

conversationalist.org/2020/03/05/the…


More from @rajiinio

13 Aug
This is actually unbelievable. In the UK, students couldn't take A-level exams due to the pandemic, so scores were automatically determined by an algorithm.

As a result, most of the As this year - way more than usual - were given to students at private/independent schools. 😩
Looks like @ICOnews has some guidance for how students can access information about their scores and contest results. h/t @mikarv for bringing this into my timeline.

This happened to International Baccalaureate (IB) scores earlier this year - in the US & abroad, students could not take exams + had their scores assigned by an algorithm.

This algo yielded unexpected results, jeopardizing student admissions to college.

wired.com/story/algorith…
26 Jul
Sometimes technology hurts people precisely because it *doesn't* work & sometimes it hurts people because it *does* work.

Facial recognition is both. When it *doesn't* work, people get misidentified, locked out, etc. But even when it *does*, it's invasive & still unsafe.
I think there's something specifically disturbing about the fact that there are deployed technologies of any sort that are not built for or tested on black people (or any other minority population). That puts these populations at risk & is a problem worth addressing specifically.
That being said - for technologies that are problematic even when they *do* work, such as facial recognition, the goal of auditing for fairness is to open the dialogue, and lead to the more important conversation of the other risks and concerns that ultimately invalidate its use.
