Algorithms can determine how long someone spends in jail, who gets access to which schools, and where people vote. @BrennanCenter signed onto a letter with recommendations for how NYC can protect against biased decision-making by city agencies. Here are some highlights #NYCAlgorithms
NYC should publicly disclose its use of algorithms, explain how they work, and allow the public to challenge decisions made by them. Explanations should account for New York's linguistic, socioeconomic, and cultural diversity.
Algorithms must be tested for disparate impact based on protected status. If disparate impact is found, the agency should demonstrate that its use of the algorithm is necessary to achieve an important agency interest and that there is no less-discriminatory alternative.
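For concreteness, one widely used heuristic for this kind of testing is the "four-fifths rule": compare each protected group's rate of favorable outcomes against a reference group and flag any ratio below 0.8. The sketch below illustrates that heuristic only; the data, threshold, and function names are illustrative assumptions, not the methodology the letter prescribes.

```python
# Minimal sketch of a disparate-impact check using the "four-fifths rule"
# heuristic. The data, 0.8 threshold, and names are illustrative assumptions,
# not the letter's prescribed methodology.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (protected_group, favorable_outcome: bool)."""
    totals, favorable = defaultdict(int), defaultdict(int)
    for group, outcome in decisions:
        totals[group] += 1
        favorable[group] += int(outcome)
    return {g: favorable[g] / totals[g] for g in totals}

def disparate_impact_ratios(decisions, reference_group):
    """Each group's favorable-outcome rate relative to the reference group."""
    rates = selection_rates(decisions)
    return {g: rate / rates[reference_group] for g, rate in rates.items()}

# Hypothetical outcomes from a benefits-screening algorithm.
outcomes = [("group_a", True)] * 80 + [("group_a", False)] * 20 \
         + [("group_b", True)] * 50 + [("group_b", False)] * 50

ratios = disparate_impact_ratios(outcomes, reference_group="group_a")
flagged = {g: r for g, r in ratios.items() if r < 0.8}  # four-fifths rule
print(ratios)   # {'group_a': 1.0, 'group_b': 0.625}
print(flagged)  # {'group_b': 0.625} -> evidence of disparate impact
```

A ratio below 0.8 is only a screening signal; under the letter's framework the agency would then have to justify the algorithm's use or find a less-discriminatory alternative.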
The City Council should pass a law giving New Yorkers harmed by discriminatory algorithms a private right of action.
NYC should establish an online resource cataloging every algorithm in use, agency by agency. This resource should include disclosure of source code so algorithms can be studied for bias and disparate impact.
City contracts with vendors must include provisions requiring them to disclose information about all datasets used to develop and implement the systems, along with any records of bias testing.
Agencies should perform an Algorithmic Impact Assessment, preferably *before* acquiring or building a new algorithm. Each agency should review existing systems for fairness, justice, bias, civil rights, privacy, and related concerns.
NYC should always consult impacted communities regarding the use of algorithms. The letter also lists a number of subject-matter experts on issues ranging from policing and housing rights to child welfare and public benefits.
After the death of a local teen, grieving classmates wore lanyards, said his name, & filmed music videos. NYPD labeled them a gang.
Today, 31 organizations and academics call on the NYPD Inspector General to audit the NYPD's gang database. brennancenter.org/our-work/resea…
We believe the gang database’s vague and subjective standards make it unreliable as an investigative tool and result in ongoing discrimination against Black and Latinx New Yorkers. slate.com/technology/202…
The racial bias of the gang database is uncontested: the NYPD testified that the people in it are 97.7% Black or Latino.
Under the guise of gang policing, the NYPD is continuing the same discriminatory policing that fueled its illegal stop-and-frisk program. theintercept.com/2019/06/28/nyp…
The basics: automated license plate readers (ALPRs) use cameras and software to scan the plate of every car that passes by. They can log the time and date, GPS coordinates, and pictures of the car. Some versions can even snap pictures of a car’s occupants and create unique vehicle IDs. theintercept.com/2019/07/09/sur…
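To make that data trail concrete, here is a rough sketch of the record a single ALPR read could generate, using only the fields named above (plate text, time and date, GPS coordinates, photos, occupant images, a derived vehicle ID). The field names and structure are my own illustration, not any vendor's actual schema.

```python
# Illustrative sketch of one ALPR scan record, limited to the fields described
# above. Names and structure are assumptions, not a real vendor's schema.
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class ALPRScan:
    plate: str                                 # OCR'd plate text
    scanned_at: datetime                       # time and date of the read
    latitude: float                            # GPS coordinates of the camera
    longitude: float
    plate_image: Optional[str] = None          # photo of the plate
    vehicle_image: Optional[str] = None        # wider shot of the car
    occupant_image: Optional[str] = None       # some versions capture occupants
    vehicle_fingerprint: Optional[str] = None  # "unique vehicle ID" some systems derive

scan = ALPRScan(
    plate="7ABC123",
    scanned_at=datetime.now(timezone.utc),
    latitude=34.0522,
    longitude=-118.2437,
)
print(scan)
```

Multiply one record like this by every passing car, every day, and the result is a retrospective map of where people drive.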
In one week, the LAPD scanned more than 320 million plates. Private companies like Vigilant Solutions sell cops (and ICE) access to their private databases of billions of scans, while Flock Safety sells ALPRs to paranoid homeowners and lets them share that data with police. cnet.com/news/license-p…
THREAD: I analyzed Citizen's contact tracing app when they were pitching it to NYC. Unsurprisingly, its approach to privacy is terrible: the app continues Citizen's paranoia-as-a-service model and gives law enforcement wide latitude to access the data.
This app collects A LOT of personal information, including location data, copies of government ID, COVID-19 diagnosis information, and undefined “health information.” Citizen only commits to deleting Bluetooth data and government ID within 30 days; nothing else is subject to any regular deletion policy.
Location data is hard to anonymize, but Citizen isn't really interested in that. They'll show you a map that makes it easy to re-identify a sick person.
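To see why a per-person map pin defeats anonymity, consider this toy sketch: match each "anonymous" sick-report pin to the nearest known home location. Everything here, including the data, the distance cutoff, and the function name, is hypothetical; it only illustrates the re-identification risk, not Citizen's actual design.

```python
# Toy re-identification sketch: join "anonymous" sick-report map pins against
# known home coordinates by distance. All data and names are hypothetical.
import math

def distance_m(a, b):
    """Approximate equirectangular distance in meters between (lat, lon) pairs."""
    lat1, lon1 = map(math.radians, a)
    lat2, lon2 = map(math.radians, b)
    x = (lon2 - lon1) * math.cos((lat1 + lat2) / 2)
    y = lat2 - lat1
    return math.hypot(x, y) * 6_371_000  # Earth radius in meters

# "Anonymous" sick-report pins shown on the app's map (hypothetical).
sick_pins = [(40.72931, -73.98702)]

# Locations an observer can already associate with people they know
# (neighbors, coworkers, public records).
directory = {
    "Resident A": (40.72930, -73.98700),
    "Resident B": (40.73110, -73.98950),
}

for pin in sick_pins:
    name, dist = min(
        ((person, distance_m(pin, home)) for person, home in directory.items()),
        key=lambda item: item[1],
    )
    if dist < 50:  # within roughly one building
        print(f"Pin at {pin} likely identifies {name} (~{dist:.0f} m away)")
```

The point is that precise, per-case pins plus ordinary local knowledge are enough to put a name on a diagnosis.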
This creates a dangerous opportunity for exposing people’s identities and subjecting them to online/offline harassment.
Great piece, featuring very important points raised by leading thinkers in this space.
I would raise a few more, with a focus on the US and its marginalized communities: slate.com/technology/202…
1) Most GIFCT removals are for "glorification." That can capture a broad swath of content, incl. general sympathies with a group or debate about its grievances.
If that sounds fine, consider your own support for BLM or antifa, and our gov's attempt to label them as terrorists.
2) The closed-door involvement of the US government in the GIFCT is worrying, not comforting.
Consider the FBI's investigation of the fictional Black Identity Extremist movement, and its current interrogation of protestors for connections to antifa. theintercept.com/2020/06/04/fbi…
Twitter has policies that prohibit platform manipulation, violence, terrorism, harassment, and hateful conduct. But today's actions announce a number of ad hoc decisions that introduce new, vaguely defined terms. Why? Here's a short analysis:
There are existing rules against platform manipulation, which covers things like spam, coordinated activity, and multiple accounts. But Twitter made these removals under a new prohibition against "coordinated harmful activity." What does this term mean? What's different?
Thread on DHS' new PIA for expanding the Terrorist Screening Database to include ppl suspected of association w/"Transnational Organized Crime."
Serious concerns w/vague definitions, bad data, & wide info-sharing; Latinos are likely the most at risk. dhs.gov/sites/default/…
Last year, a federal judge ruled that the terrorist screening database violated the rights of Americans who were on the list. Rather than scale back, this PIA covers an expansion to track even more people. Many of the same concerns apply. nytimes.com/2019/09/04/us/…
The PIA acknowledges that this new category goes beyond the initial purpose of the watchlist (terrorism). But because the President said this group of people is ALSO a national security threat, it's fine? 🤷🏽‍♂️