Crypti-Calli @Iwillleavenow
20 tweets
Morning, folks! There's been quite a bit of recent talk about facial recognition and I thought it might be helpful to do a thread laying out why some people (THIS GIRL) think it needs regulations, like, yesterday and what the issues are.
So, facial recognition has been a common thing in sci-fi for years. In concept, it is the ultimate crime-solving tool, ultimate security, identifies bad actors, etc. (unless you have one of those Mission Impossible masks).
In practice, facial recognition has been in development and in use for years at various levels. It's how Facebook guesses who should be tagged in photos you upload, it's used to unlock phones, and it can be used to purchase goods in some parts of China. Amazon has been an early developer with its Rekognition service.
It's touted as a phenomenal way to identify bad actors, to find criminals and suspects, to protect people from known threats and stalkers (as was the justification in the...contentious use of facial recognition recently at a Taylor Swift concert). theguardian.com/music/2018/dec…
HOWEVER, facial recognition is notoriously unreliable. When the ACLU tested Amazon's Rekognition, it incorrectly matched 28 members of Congress to mugshots.
theverge.com/2018/7/26/1761…
These mistakes are not neutral. These systems are notably less accurate across races, frequently misidentifying African American and other darker-skinned individuals, and they misidentify women at higher rates too. gizmodo.com/can-we-make-no…; theverge.com/2018/2/11/1700…
Since this technology is being heavily touted as for policing and security...and it has problems identifying dark-skinned individuals correctly...and our policing system is already, shall we say...suspect* in this area (*by "suspect" I mean "deeply dysfunctional and biased")...
You can see why there could be some major issues with the technology used to identify suspects and criminals having a high rate of misidentification for the population group already most at risk of law enforcement abuse, prejudice, and injustice.
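To make concrete how fast even a seemingly small error rate compounds at scale, here's a back-of-the-envelope sketch. Every number in it is an illustrative assumption (crowd size, watchlist size, a symmetric 99% accuracy), not data from any real deployment:

```python
# Back-of-the-envelope: why "99% accurate" still misidentifies many people at scale.
# All numbers below are assumptions for illustration only.
crowd_size = 50_000        # faces scanned at a stadium-sized event
watchlist_matches = 10     # people in the crowd actually on a watchlist
accuracy = 0.99            # assume 99% true-positive AND true-negative rate

true_positives = watchlist_matches * accuracy
false_positives = (crowd_size - watchlist_matches) * (1 - accuracy)

# Precision: the chance that a flagged person is actually on the watchlist
precision = true_positives / (true_positives + false_positives)

print(f"Expected false alarms: {false_positives:.0f}")
print(f"Odds a flagged face is a real match: {precision:.1%}")
```

Under these made-up numbers, roughly 500 innocent people get flagged to find 10 real matches, so a flag is wrong about 98% of the time. Who absorbs the cost of those false alarms is exactly the question above.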
Let's say it works, though. Let's say facial recognition tech advances to a point where it correctly identifies people, say, 99% of the time.

Here is where I get to dive into the privacy nightmare.
Facial recognition in public places is a surveillance and privacy nightmare. If I go to a public park now, I anticipate that the other people at the park will see my face. They might even recognize me, if I go there often.
I tend not to anticipate that the other people who see my face will be able to instantly cross-reference that with every online account I have, my purchasing habits, music taste, family history, address, etc. That level of being "known" by strangers is far beyond my expectations.
In online settings, I can make adjustments for this. I can change my account settings, make new passwords, do things in incognito mode, use a VPN, etc. But this is real life. And this is my face. I can't VPN my face or change it or even necessarily know when I'm being identified.
Biometric information, like your face, vocal tone, health, and now individual olfactory information (thanks, California?), is considered sensitive partly because it is not something that we can adjust. My face is my face (without costly surgery). Once it's known, it's known.
This is disturbing when it comes to mass use of this unchangeable information in law enforcement - even if I actually AM wanted for a crime, I still have rights and expectations of privacy. Warrants and Fourth Amendment protections usually stand between the government and sensitive information like this.
Beyond that, how do I opt out of facial recognition in marketing? How do I prevent stalkers from setting up online alerts for photos matching my facial parameters - which could give them real-time information about where I am? How do I live in the world on facial safe-mode?
This area has enormous potential - I am not necessarily trying to eliminate the tech. But it has been developed largely without regulations or oversight. We haven't forced the debates about how it interacts with warrants and search and privacy laws.
We haven't discussed datasets and eliminating bias and mitigating risk. No establishing consent and disclosure rules. No accounting for who gets to use it and all the ways it can go wrong and who bears the burden and risk of that.
What I (and other "tech-paranoid" people like me) am pushing for is to drag this tech - which has operated without engaging these questions for years - into the public sphere and force its developers to address these issues with us. If they don't - shut it down.
Companies that refuse to engage with concerns and criticisms and will not allow transparency about new developments in tech with the people who will be affected by them do not deserve to operate in this space. It's time we had this conversation.
Addition: A tale of two facial recognition technology approaches.