Ángel Díaz
Apr 30, 2019 · 17 tweets
I'll be sharing a few thoughts from NYC's automated decision systems task force's first public forum. #NYCAlgorithms

Livestream here: nyls.mediasite.com/mediasite/Cata…
Co-Chair Kelly Jin notes that people from Australia are tuning in to the live stream.

This shows the extent to which communities around the world are looking to NYC to be a leader in algorithmic transparency and accountability. There's A LOT of work to do to meet that goal...
Futurist Andrew Nicklin recommends that NYC impose requirements on vendors to share the load of algorithmic accountability/transparency. YES. The city has lots of negotiation leverage and can require all kinds of ongoing audits.
Nicely phrased: we vet elected officials responsible for making decisions about public welfare; we should have the ability to vet machines doing the same thing.
Sarah Kaufman, Associate Director, New York University Rudin Center for Transportation lays out the extent to which public movements across the city are tracked, logged, and shared. This has real implications for the liberties of New Yorkers
Case in point: license plate readers sharing their huge repository of data and movements with ICE.
Self-driving vehicles, like facial recognition systems generally, perform worse at detecting people of color. Without oversight, Kaufman rightly warns, we're inviting yet another risk: car accidents.
Janai Nelson from the NAACP LDF calls on the task force to recommend full transparency of the NYPD's use of ADS. The potential impacts on communities of color, combined with the department's history of unconstitutional policing makes any carveouts for NYPD unacceptable.👏🏽👏🏽👏🏽
Publicly identify, categorize, and share a list of the NYPD's use of surveillance technology. This list should be continuously updated.

The #POSTAct requires this!
Nelson walking through many of the points made in Dirty Data, Bad Predictions: How Civil Rights Violations Impact Police Data, Predictive Policing Systems, and Justice: papers.ssrn.com/sol3/papers.cf…
Nelson gave a brief but damning overview of NYPD's unconstitutional and discriminatory practices. Any ADS that relies on this historical crime data will be irreparably tainted.
NAACP LDF calls for an outright ban on any ADS that relies on historical NYPD crime data.
No sugarcoating: ADS threatens to completely redefine everything ranging from reasonable suspicion to freedom of speech.
Task force member @mer__edith asking about how to evaluate systems for bias before they're deployed on public data. This is a real concern, and the panelists aren't convinced existing audit oversight mechanisms can do the job.
An ongoing topic of conversation: impacted individuals. There are great advocates here, but holding this forum at 6 PM on a weeknight inside a Tribeca law school is not the way to meaningfully engage the public at large.
The topic of ADS doesn't need to be esoteric or confusing. Any member of the public can understand why it's scary to have computers making decisions about their education, housing, and freedom.
Another good question: who should determine when a data set is too tainted to use in an ADS? Harris recommends a third party such as a city's CTO or Chief Analytics Officer. This could put a lot of power in a Mayor's hands...

More from @AngelSDiaz_

Sep 22, 2020
After the death of a local teen, grieving classmates wore lanyards, said his name, & filmed music videos. NYPD labeled them a gang.

Today, 31 organizations and academics call on the NYPD Inspector General to audit the NYPD's gang database. brennancenter.org/our-work/resea…
We believe the gang database’s vague and subjective standards make it unreliable as an investigative tool and result in ongoing discrimination against Black and Latinx New Yorkers. slate.com/technology/202…
The racial bias of the gang database is uncontested: NYPD testified it is 97.7% Black or Latino.

Under the guise of gang policing, the NYPD is continuing the same discriminatory policing that fueled their illegal stop-and-frisk program. theintercept.com/2019/06/28/nyp…
Sep 10, 2020
This summer, a Black family was arrested and held face-down on asphalt after an ALPR misidentified their SUV as a stolen motorcycle from another state.

A new @BrennanCenter report analyzes the legal and policy landscape of this pervasive surveillance tool. brennancenter.org/our-work/resea…
The basics: ALPRs use cameras and software to scan the plates of every car that passes by. They can log the time and date, GPS coordinates, and pictures of the car. Some versions can even snap pictures of a car’s occupants and create unique vehicle IDs. theintercept.com/2019/07/09/sur…
In one week, the LAPD scanned more than 320 million plates. Private companies like Vigilant Solutions sell cops (and ICE) access to their private database of billions of scans, while Flock Safety sells ALPRs to paranoid homeowners and lets them share with police. cnet.com/news/license-p…
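As a rough illustration of the data described above, a single ALPR read might bundle a plate, a timestamp, coordinates, and images into one record. This is a minimal sketch; the field names are hypothetical and don't reflect any vendor's actual schema:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical sketch of what one ALPR scan might log.
# Field names are illustrative, not any vendor's real schema.
@dataclass
class PlateScan:
    plate: str                     # OCR'd plate text
    scanned_at: datetime           # time and date of the read
    lat: float                     # GPS coordinates of the camera
    lon: float
    plate_image: bytes = b""       # photo of the car/plate
    occupant_image: bytes = b""    # some systems capture occupants too
    vehicle_fingerprint: str = ""  # unique vehicle ID some versions create

scan = PlateScan(
    plate="ABC1234",
    scanned_at=datetime(2020, 9, 10, 14, 30, tzinfo=timezone.utc),
    lat=34.0522,
    lon=-118.2437,
)
# At the scale reported above, a week of scanning yields hundreds of
# millions of such records, each tying a vehicle to a place and time.
```

The point of the sketch: every field beyond the plate itself is location and identity metadata, which is why aggregated scans amount to a movement history.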
Sep 10, 2020
THREAD: I analyzed Citizen's contact tracing app when they were pitching it to NYC. Unsurprisingly, its approach to privacy is terrible, continues to encourage paranoia-as-a-service, and has wide latitude for law enforcement access.
This app collects A LOT of personal information, including location data, copies of gov-ID, COVID-19 diagnosis information, and undefined "health information." They only commit to deleting Bluetooth data & gov-ID within 30 days. Nothing else is subject to any regular deletion policy.
Location data is hard to anonymize, but Citizen isn't really interested in that. They'll show you a map that makes it easy to re-identify a sick person.

This creates a dangerous opportunity for exposing people's identities and subjecting them to online/offline harassment.
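To make the retention gap concrete, here's a minimal sketch of a per-category deletion check. The category names and the retention table are assumptions based on the commitments described above (30-day deletion for Bluetooth data and gov-ID only), not the app's real configuration:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical retention table modeling the commitments described above:
# only Bluetooth data and gov-ID get a 30-day window; everything else
# has no deletion policy at all (None = retained indefinitely).
RETENTION_DAYS = {
    "bluetooth": 30,
    "gov_id": 30,
    "location": None,
    "covid_diagnosis": None,
    "health_info": None,
}

def should_delete(category: str, collected_at: datetime, now: datetime) -> bool:
    days = RETENTION_DAYS.get(category)
    if days is None:
        return False  # no policy: the data is never scheduled for deletion
    return now - collected_at > timedelta(days=days)

now = datetime(2020, 9, 10, tzinfo=timezone.utc)
old = now - timedelta(days=45)
assert should_delete("gov_id", old, now) is True     # past the 30-day window
assert should_delete("location", old, now) is False  # never deleted
```

The sketch shows why "we delete X in 30 days" is not a privacy policy: every category mapped to `None` accumulates forever.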
Aug 13, 2020
Great piece, featuring very important points raised by leading thinkers in this space.

I would raise a few more, with a focus on the US and its marginalized communities: slate.com/technology/202…
1) Most GIFCT removals are for "glorification." That can capture a broad swath of content, incl. general sympathies with a group or debate about its grievances.

If that sounds fine, consider your own support for BLM or antifa, and our gov's attempt to label them as terrorists.
2) The closed-door involvement of the US government in the GIFCT is worrying, not comforting.

Consider the FBI's investigation of the fictional Black Identity Extremist movement, and its current interrogation of protestors for connections to antifa. theintercept.com/2020/06/04/fbi…
Jul 22, 2020
Twitter has policies that prohibit platform manipulation, violence, terrorism, harassment, and hateful conduct. But today's actions announce a number of ad-hoc decisions that introduce new vaguely defined terms. Why? Here's a short analysis:
First, again, you have to read platform announcements in conjunction with a secondary source to understand what's going on. In this case, it's this piece from @BrandyZadrozny and @oneunderscore__: nbcnews.com/tech/tech-news…
There are existing rules against platform manipulation, which covers things like spam, coordinated activity, and multiple accounts. But Twitter made these removals under a new prohibition against "coordinated harmful activity." What does this term mean? What's different?
Jul 20, 2020
Thread on DHS' new PIA for expanding the Terrorist Screening Database to include ppl suspected of association w/ "Transnational Organized Crime."

Serious concerns w/vague definitions, bad data, & wide info-sharing; Latinos are likely the most at risk.
dhs.gov/sites/default/…
Last year, a federal judge ruled that the terrorist screening database violated the rights of Americans who were on the list. Rather than scale back, this PIA covers an expansion to track even more people. Many of the same concerns apply. nytimes.com/2019/09/04/us/…
The PIA acknowledges that this new category goes beyond the initial purpose of the watchlist (terrorism). But because the President said this group of people is ALSO a national security threat, it's fine? 🤷🏽‍♂️