#NYPD’s surveillance arsenal is an accountability black hole, with a growing list of scandals ranging from running celebrity headshots through facial recognition to secretly collecting DNA samples from kids.
Is this who we want running @NYCDoITT? The article praises everything from the Domain Awareness System to predictive policing to body cameras without a word acknowledging the problems associated with these tools or NYPD's commitment to secrecy.
So here are a few. /2
The Domain Awareness System - a massive network of surveillance cameras, license plate readers, sensors, databases, and more - grew out of a partnership with Microsoft. Whenever Microsoft sells DAS to another city, NYC gets a 30% cut. /3 fastcompany.com/3000272/nypd-m…
A similar partnership with IBM used surveillance footage to train the company's video analytics system, which claimed it could spot people based on skin color. /4
For years, we've been suing NYPD to learn about their predictive policing systems. We learned the system was designed not to store the algorithm's hotspot outputs, making it difficult to evaluate effectiveness or disparate impact. /5 thedailybeast.com/red-flags-as-n…
The NYPD's body cam rollout did little to improve police accountability. But we recently learned that NYPD runs body camera footage through facial recognition to make IDs. /6 gothamgazette.com/city/8935-nypd…
We recently published a chart tracking all of NYPD's surveillance tools. Many of these tools are actively invading the civil rights and civil liberties of New Yorkers, but there is virtually zero public oversight or accountability. /7 brennancenter.org/our-work/resea…
On December 18 at 1 PM, the @NYCCouncil will hold a hearing on the #POSTAct, which would require common-sense disclosures that were absent during Ms. Tisch's stint as NYPD's Deputy Commissioner of Information Tech. brennancenter.org/our-work/resea…
After the death of a local teen, grieving classmates wore lanyards, said his name, & filmed music videos. NYPD labeled them a gang.
Today, 31 organizations and academics call on the NYPD Inspector General to audit the NYPD's gang database. brennancenter.org/our-work/resea…
We believe the gang database’s vague and subjective standards make it unreliable as an investigative tool and result in ongoing discrimination against Black and Latinx New Yorkers. slate.com/technology/202…
The racial bias of the gang database is uncontested: the NYPD testified that 97.7% of the people in it are Black or Latino.
Under the guise of gang policing, the NYPD is continuing the same discriminatory policing that fueled their illegal stop-and-frisk program. theintercept.com/2019/06/28/nyp…
The basics: ALPRs (automated license plate readers) use cameras and software to scan the plates of every car that passes by. They can log the time and date, GPS coordinates, and pictures of the car. Some versions can even snap pictures of a car’s occupants and create unique vehicle IDs. theintercept.com/2019/07/09/sur…
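To make that data footprint concrete, here is a minimal sketch of what a single ALPR read could look like as a record. The field names are illustrative assumptions based on the capabilities described above, not any vendor's actual schema.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

# Hypothetical record for one ALPR "read" (illustrative fields only).
@dataclass
class PlateRead:
    plate: str                          # OCR'd plate text, e.g. "ABC1234"
    read_time: datetime                 # time and date of the scan
    latitude: float                     # GPS coordinates of the camera
    longitude: float
    vehicle_photo: bytes                # picture of the car
    occupant_photo: Optional[bytes]     # some systems also photograph occupants
    vehicle_fingerprint: Optional[str]  # unique vehicle ID some versions generate

# Every passing car produces one of these records, so even a single fixed
# camera accumulates a time-stamped log of who drove past it.
example = PlateRead(
    plate="ABC1234",
    read_time=datetime(2019, 7, 9, 14, 30),
    latitude=40.7128,
    longitude=-74.0060,
    vehicle_photo=b"...",
    occupant_photo=None,
    vehicle_fingerprint=None,
)
```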
In one week, the LAPD scanned more than 320 million plates. Private companies like Vigilant Solutions sell cops (and ICE) access to their private database of billions of scans, while Flock Safety sells ALPRs to paranoid homeowners and lets them share with police. cnet.com/news/license-p…
THREAD: I analyzed Citizen's contact-tracing app when they were pitching it to NYC. Unsurprisingly, its approach to privacy is terrible, it continues to encourage paranoia-as-a-service, and it leaves wide latitude for law enforcement access.
This app collects A LOT of personal information, including location data, copies of gov-ID, COVID-19 diagnosis information, and undefined “health information.” They only commit to deleting Bluetooth data & gov-ID within 30 days. Nothing else is subject to any regular deletion policy.
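To show how lopsided that retention policy is, here is a minimal sketch that simply restates the categories above; the names are mine for illustration, not Citizen's actual data model.

```python
# Illustrative summary of the retention commitments described above;
# category names are assumptions, not Citizen's actual field names.
RETENTION_POLICY = {
    "bluetooth_proximity_data": "delete within 30 days",
    "government_id_copy":       "delete within 30 days",
    "precise_location_history": "no stated deletion policy",
    "covid_19_diagnosis":       "no stated deletion policy",
    "undefined_health_info":    "no stated deletion policy",
}

for category, policy in RETENTION_POLICY.items():
    print(f"{category}: {policy}")
```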
Location data is hard to anonymize, but Citizen isn't really interested in that. They'll show you a map that makes it easy to re-identify a sick person.
This creates a dangerous opportunity for exposing people’s identities and subjecting them to online/offline harassment.
Great piece, featuring very important points raised by leading thinkers in this space.
I would raise a few more, with a focus on the US and its marginalized communities: slate.com/technology/202…
1) Most GIFCT removals are for "glorification." That can capture a broad swath of content, incl. general sympathies with a group or debate about its grievances.
If that sounds fine, consider your own support for BLM or antifa, and our gov's attempt to label them as terrorists.
2) The closed-door involvement of the US government in the GIFCT is worrying, not comforting.
Consider the FBI's investigation of the fictional Black Identity Extremist movement, and its current interrogation of protestors for connections to antifa. theintercept.com/2020/06/04/fbi…
Twitter has policies that prohibit platform manipulation, violence, terrorism, harassment, and hateful conduct. But today's actions announce a number of ad hoc decisions that introduce new, vaguely defined terms. Why? Here's a short analysis:
There are existing rules against platform manipulation, which cover things like spam, coordinated activity, and multiple accounts. But Twitter made these removals under a new prohibition against "coordinated harmful activity." What does this term mean? What's different?
Thread on DHS' new Privacy Impact Assessment (PIA) for expanding the Terrorist Screening Database to include ppl suspected of association w/"Transnational Organized Crime."
Serious concerns w/vague definitions, bad data, & wide info-sharing; Latinos are likely the most at risk. dhs.gov/sites/default/…
Last year, a federal judge ruled that the terrorist screening database violated the rights of Americans who were on the list. Rather than scale back, this PIA covers an expansion to track even more people. Many of the same concerns apply. nytimes.com/2019/09/04/us/…
The PIA acknowledges that this new category goes beyond the initial purpose of the watchlist (terrorism). But because the President said this group of people is ALSO a national security threat, it's fine? 🤷🏽‍♂️