Next up, excited for @KLdivergence with a risk assessment tool analysis of “overbooking”: when someone is arrested on charges more serious than the facts support. First off: there’s very little accountability here, so this is gonna be a hard problem to address. #FAT2020
Guess what? There’s a lot of racial disparity in this. I’ll say... But again, this is hard to address because there’s no real “ground truth”. It’s complicated, but we can attack the issue by comparing actual convictions vs. accusations.
So let’s look at the pre-trial risk assessment algos. One in particular, used in the USA, is made up of many models that output a score (rough sketch below). Green = freedom. Red = you go to jail.
And, there are exceptions:
Also, exceptions the other way:
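For intuition, here’s a minimal sketch of that composite green/red scoring idea. This is NOT the actual tool: the sub-scores, weights, and cutoff are all invented for illustration.

```python
# Hypothetical composite pre-trial risk score: several sub-models each emit a
# score, which gets combined and bucketed into a color band. All numbers here
# are made up; the real tool's internals are not public.

def composite_score(sub_scores: list[float], weights: list[float]) -> float:
    """Weighted combination of the sub-models' outputs."""
    return sum(s * w for s, w in zip(sub_scores, weights))

def color_band(score: float, cutoff: float = 0.5) -> str:
    """Green = recommend release, red = recommend detention."""
    return "green" if score < cutoff else "red"

print(color_band(composite_score([0.2, 0.8, 0.5], [0.5, 0.3, 0.2])))  # -> green
```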
So let’s audit this mfer! Ran the experiment twice, so we’re able to check, years later, whether the predictions were correct. Was this person actually convicted? Well... in 27% of cases, people were accused of much more serious crimes than they were ultimately convicted of.
Wow. That’s one in four. One in four people are arrested on charges that aren’t supported by enough facts to convict. We should spend more time working with judges so they are aware of this. YES!
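If you want to picture the audit, it boils down to something like this. The record fields and severity scale are hypothetical; only the ~27% finding comes from the talk.

```python
# Sketch of the overbooking audit: for each case, compare charge severity at
# arrest with severity at eventual conviction. The 1-5 severity scale and the
# records below are a toy example, invented for illustration.

cases = [
    {"arrest_severity": 5, "conviction_severity": 2},  # overbooked
    {"arrest_severity": 3, "conviction_severity": 3},
    {"arrest_severity": 4, "conviction_severity": 1},  # overbooked
    {"arrest_severity": 2, "conviction_severity": 2},
]

def overbooked(case: dict, gap: int = 2) -> bool:
    """Arrest charges were at least `gap` severity levels above the conviction."""
    return case["arrest_severity"] - case["conviction_severity"] >= gap

rate = sum(overbooked(c) for c in cases) / len(cases)
print(f"overbooking rate: {rate:.0%}")  # 50% on this toy data; the talk found ~27%
```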
All the privacy news on Sidewalk says it’ll drain your battery, eat into your data plan, and expose you to hacking threats. All of that is true, but it misses the forest for the trees. Amazon in possession of a nationwide network gives them the power of a utility, with less oversight.
People sometimes forget that the original tech villains were the telecoms, and rightfully so. Sidewalk can’t wiretap your conversations* like the telecoms can, but that’s not the only way police have used them to put innocent people in jail. newyorker.com/news/news-desk…
It wasn’t until 2018 that the Supreme Court said police need a warrant to access cell site location records. Will this decision apply to WiFi probes? Maybe? It was a narrow decision with Roberts as tiebreaker. Will it stop them from trying? Who knows. nytimes.com/2018/06/22/us/…
Kinda excited and kinda sad that in my short career as a tech activist, I can now, for the first time, throw up my hands and say “I’ve been talking about this for years!” 🤦🏻‍♀️
One of my most popular tweets, in fact, was about this. I don’t love sharing it bc it has some mild factual errors, thanks to haste and the limited info available at the time, but for posterity, here it is. Every now and then it resurfaces in my mentions bc (surprise!) people DO care.
Did you know that Amazon Sidewalk can track you, even if you’re not logged on?
Last year, people called me paranoid for worrying about probe requests and Amazon’s mesh network. Well, here we are, and location tracking is now more clearly at the heart of the operation.
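For the non-radio people: a probe request is the little “is my known network nearby?” broadcast your phone sends out constantly, and it carries a device MAC address. Here’s a rough sketch of how anyone nearby can watch them, assuming a Wi-Fi adapter in monitor mode (the interface name is an assumption):

```python
# Passive probe-request logging with scapy. Requires a Wi-Fi adapter in
# monitor mode (here assumed to be "wlan0mon") and root privileges.
from scapy.all import sniff
from scapy.layers.dot11 import Dot11, Dot11Elt, Dot11ProbeReq

def log_probe(pkt):
    if pkt.haslayer(Dot11ProbeReq):
        mac = pkt[Dot11].addr2  # transmitting device's MAC address
        # The SSID element names the network the device is searching for.
        ssid = pkt[Dot11Elt].info.decode(errors="replace") if pkt.haslayer(Dot11Elt) else ""
        print(f"probe from {mac} looking for {ssid!r}")

sniff(iface="wlan0mon", prn=log_probe, store=False)
```

(Modern phones randomize MACs in probe requests, which helps, but plenty of devices still leak stable identifiers.)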
The more devices they sell into the world, the more complete a picture they’ll have of what’s in it; what we do, where we go, how we spend our time. Sidewalk devices talk to each other. And likely to all WiFi devices, with or w/o Sidewalk, in subtle ways. theverge.com/2020/9/21/2144…
Most consumer tracking efforts to date have focused on the individual and their device. Amazon, however, seems to want to turn the physical world into an Amazon Go store, with sensors, microphones, and cameras aimed at everything we do. There is no opting out.
🚨🚨This should feel like winning but it feels more like a thinly veiled threat. @BradSmi says MSFT won’t sell FRT without a federal law governing its use. IN FACT, Microsoft has WRITTEN such a law, and their lobbyists are pushing it from state to state.🚨🚨 /1 thehill.com/policy/technol…
SURPRISE: their law sucks. Policy experts agree that, while there is benefit to requiring a warrant for the use of FRT, the MSFT-backed law has giant loopholes: a vague definition of “emergency” permits continued rampant use of FRT unsupervised by the courts. /2
Microsoft and Amazon have BOTH pressured Congress to unify privacy standards nationwide, because they want to undo the CCPA and the local FRT bans in cities like SF and Oakland, plus the ones proposed in NYC, Boston... the list goes on /3
1. In large-scale automation cases like this one, you’d better be damn well sure your classifiers aren’t biased. Since this is an impossible task (all models are biased), AT LEAST retain meaningful human oversight of the task at hand (a sketch of that gating follows below).
2. Don’t put robots in charge of news, end of sentence. Beyond what happened here, which is just ordinary AI racism, models can’t reliably detect propaganda, sarcasm, or intent... for all our sakes, don’t put robots in charge of news (PLEASE)
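As promised above, here’s what “meaningful human oversight” could look like in code: a minimal confidence-gating sketch. The classifier interface, threshold, and names are all hypothetical.

```python
# Human-in-the-loop gating: auto-apply only high-confidence model outputs and
# route everything else to a person. Threshold and names are assumptions.
from dataclasses import dataclass

@dataclass
class Decision:
    label: str
    confidence: float

REVIEW_THRESHOLD = 0.9  # below this, a human must sign off (invented value)

def route(decision: Decision) -> str:
    """Return "auto" for high-confidence calls, "human_review" otherwise."""
    if decision.confidence >= REVIEW_THRESHOLD:
        return "auto"
    return "human_review"

print(route(Decision("photo_caption", 0.72)))  # -> human_review
```

This doesn’t fix biased classifiers; it just means the worst calls don’t ship unreviewed.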