NYPD Commissioner O’Neill's op-ed for @PrivacyProject sidesteps or ignores many of the problems with facial recognition. Let’s be clear about things: nytimes.com/2019/06/09/opi…
NYPD says their analyses don’t look at race, gender, or ethnicity. But that isn’t the problem. Multiple tests of facial recognition systems show they can't reliably identify women & people with darker skin tones. 1 in 2 New Yorkers are women, & over 50% of NYC is Black or Latinx.
That means facial recognition doesn't reliably work on the average New Yorker even under the best circumstances. And the truth is, NYPD’s use of this technology falls way short of that…
Commissioner O'Neill says NYPD doesn't run police sketches through facial recognition because they "would be of no value." We agree. But using photo editing software to create digital collages is virtually the same thing.
Cutting out a photo of a person’s lips and pasting them on top of someone else’s face is asking facial recognition systems to interpret art projects. Equally useless and unaddressed in the op-ed: using celebrity photos of Woody Harrelson or J.R. Smith to find suspects.
Commissioner O'Neill doesn't discuss what kind of confidence thresholds are acceptable for the NYPD. Just because the system returns its best guess doesn't mean the potential match is reliable.
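To see why thresholds matter, here's a minimal sketch of the distinction between "the system's best guess" and "a reliable match." All function names and scores are illustrative assumptions, not the NYPD's actual system or vendor API:

```python
# Hypothetical sketch: a "best guess" always exists, even when no
# candidate clears a meaningful confidence threshold. Names and
# numbers are invented for illustration.

def best_guess(candidates):
    """Return the top-scoring match, no matter how weak it is."""
    return max(candidates, key=lambda c: c["score"])

def thresholded_matches(candidates, threshold):
    """Return only candidates above a minimum confidence threshold."""
    return [c for c in candidates if c["score"] >= threshold]

# A probe photo compared against a gallery of arrest photos:
candidates = [
    {"id": "A", "score": 0.41},
    {"id": "B", "score": 0.38},
    {"id": "C", "score": 0.35},
]

# The system will always surface a "best guess"...
print(best_guess(candidates))                  # candidate A, score 0.41

# ...but at a stricter threshold, there is no reliable match at all.
print(thresholded_matches(candidates, 0.80))   # []
```

Without a disclosed threshold, every search "succeeds" in returning someone, which is exactly the concern.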
NYPD says their system only analyzes pictures against its database of arrest photos. But biased policing programs like stop-and-frisk make it likely that most of the faces in this database will be black and brown. The result? A feedback loop of over-policing communities of color.
We're laying the groundwork for an unaccountable digital stop & frisk program. In 2018, facial recognition led to 998 arrests. What racial groups are most impacted? Is it used to enforce low-level crimes like turnstile jumping? How often were people stopped & questioned?
One thing we can agree on: the public should know how NYPD uses facial recognition and what safeguards it has in place.
But that information shouldn't be in an NYT op-ed responding to public criticism; it should be made available to the public and @NYCCouncil.
The NYPD should make common-sense transparency disclosures about every surveillance tool it uses. Police around the country already do this; it's time for NYC to catch up and pass the #POSTAct.
After the death of a local teen, grieving classmates wore lanyards, said his name, & filmed music videos. NYPD labeled them a gang.
Today, 31 organizations and academics call on the NYPD Inspector General to audit the NYPD's gang database. brennancenter.org/our-work/resea…
We believe the gang database’s vague and subjective standards make it unreliable as an investigative tool and result in ongoing discrimination against Black and Latinx New Yorkers. slate.com/technology/202…
The racial bias of the gang database is uncontested: NYPD testified it is 97.7% Black or Latino.
Under the guise of gang policing, the NYPD is continuing the same discriminatory policing that fueled their illegal stop-and-frisk program. theintercept.com/2019/06/28/nyp…
The basics: ALPRs use cameras and software to scan the plates of every car that passes by. They can log the time and date, GPS coordinates, and pictures of the car. Some versions can even snap pictures of a car’s occupants and create unique vehicle IDs. theintercept.com/2019/07/09/sur…
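A minimal sketch of the kind of record an ALPR can log for every passing car, based on the capabilities described above. The field names are hypothetical, not any vendor's actual schema:

```python
# Hypothetical per-scan ALPR record. Every field name here is an
# assumption for illustration; vendors' real schemas differ.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class PlateRead:
    plate: str                   # OCR'd license plate text
    timestamp: datetime          # time and date of the scan
    lat: float                   # GPS coordinates of the camera
    lon: float
    vehicle_photo: bytes = b""   # picture of the car
    occupant_photo: bytes = b""  # some versions photograph occupants
    vehicle_id: str = ""         # a unique "fingerprint" for the car

# One camera, one passing car, one permanent record:
read = PlateRead(
    plate="ABC1234",
    timestamp=datetime(2019, 7, 9, 14, 30),
    lat=34.0522,
    lon=-118.2437,
)
print(read.plate, read.timestamp)
```

Multiply one record like this by hundreds of millions of scans a week and you get the databases described below.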
In 1 week, the LAPD scanned more than 320 mil plates. Private companies like Vigilant Solutions sell cops (and ICE) access to their private database of billions of scans, while Flock Safety sells ALPRs to paranoid homeowners and lets them share with police cnet.com/news/license-p…
THREAD: I analyzed Citizen's contact tracing app when they were pitching it to NYC. Unsurprisingly, its approach to privacy is terrible, continues to encourage paranoia-as-a-service, and has wide latitude for law enforcement access.
This app collects A LOT of personal information, including location data, copies of gov-ID, COVID-19 diagnosis information, and undefined “health information.” They only commit to deleting Bluetooth data & gov-ID within 30 days. Nothing else is subject to any regular deletion policy.
Location data is hard to anonymize, but Citizen isn't really interested in that. They'll show you a map that makes it easy to re-identify a sick person.
This creates a dangerous opportunity for exposing people’s identities and subjecting them to online/offline harassment.
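A toy illustration of why mapped location data re-identifies people: even a pin rounded to roughly a city block singles someone out when few people live there. All data here is invented:

```python
# Invented example: a "de-identified" sick report shown as a map pin,
# rounded to ~3 decimal places (roughly a city block).
pin = (40.713, -74.006)

# Anyone with local knowledge of who lives on that block can
# link the pin back to a person.
residents_by_block = {
    (40.713, -74.006): ["the one household on this block"],
}

def reidentify(pin, directory):
    """Look up who the 'anonymous' pin plausibly points to."""
    return directory.get(pin, [])

print(reidentify(pin, residents_by_block))
```

Rounding coordinates is not anonymization when the underlying population is sparse, which is the core problem with Citizen's map.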
Great piece, featuring very important points raised by leading thinkers in this space.
I would raise a few more, with a focus on the US and its marginalized communities: slate.com/technology/202…
1) Most GIFCT removals are for "glorification." That can capture a broad swath of content, incl. general sympathies with a group or debate about its grievances.
If that sounds fine, consider your own support for BLM or antifa, and our gov's attempt to label them as terrorists.
2) The closed-door involvement of the US government in the GIFCT is worrying, not comforting.
Consider the FBI's investigation of the fictional Black Identity Extremist movement, and its current interrogation of protestors for connections to antifa. theintercept.com/2020/06/04/fbi…
Twitter has policies that prohibit platform manipulation, violence, terrorism, harassment, and hateful conduct. But today's actions announce a number of ad-hoc decisions that introduce new vaguely defined terms. Why? Here's a short analysis:
There are existing rules against platform manipulation, which covers things like spam, coordinated activity, and multiple accounts. But Twitter made these removals under a new prohibition against "coordinated harmful activity." What does this term mean? What's different?
Thread on DHS' new PIA for expanding the Terrorist Screening Database to include ppl suspected of association w/"Transnational Organized Crime."
Serious concerns w/vague definitions, bad data, & wide info-sharing; Latinos are likely the most at risk. dhs.gov/sites/default/…
Last year, a federal judge ruled that the terrorist screening database violated the rights of Americans that were on the list. Rather than scale back, this PIA covers an expansion to track even more people. Many of the same concerns apply. nytimes.com/2019/09/04/us/…
The PIA acknowledges that this new category goes beyond the initial purpose of the watchlist (terrorism). But because the President said this group of people is ALSO a national security threat, it's fine? 🤷🏽‍♂️