I'm live-tweeting from NY City Council this afternoon for an oversight hearing on the #NYCAlgorithms Transparency Task Force.
CM Koo is walking through the problems with government use of algorithms: the public doesn't know when they're being used, and the systems depend on biased assumptions and flawed data sets.
The ADS Task Force was empaneled to study government use of algorithms, and to come up with recommendations for minimizing harms. But this is the first public hearing since the law passed a year ago.
The first panel is from the task force's co-chairs; you can see the whole task force here: www1.nyc.gov/site/adstaskfo…
The chair says it's been more difficult than anticipated to determine which "automated decision systems" the task force will work on.
Chair says the task force won't create a list of algorithms in use by the city. It'll empower agencies to do citywide assessments.
Task force is kicking off public engagement by holding 2 hearings on April 30 and May 30 (both in Manhattan), along with events during the summer. These are intended to be forums for impacted individuals to present their concerns.
CM Koo is asking why the Task Force isn't making its minutes public, and points to Vermont's approach to posting public agendas and minutes. Chairs say they're trying to create a safe space for frank conversations.
Koo is asking why City Council isn't allowed to attend. Getting similar answers: the chairs point to the public forums as the space for the public to be involved, and offer to schedule City Council briefings as well.
Chairs are saying the Task Force isn't looking at individual algorithms specifically.
Basically: the task force can't agree on what an automated decision system is. Doesn't speak well for what their written report will address.
The term “automated decision system” means computerized implementations of algorithms, including those derived from machine learning or other data processing or artificial intelligence techniques, which are used to make or assist in making decisions.
The report is due November of this year, but the Task Force is holding its first public forums this summer. That's a short timeline to meaningfully incorporate concerns from impacted communities.
The Task Force says the advocacy letters from our #NYCAlgorithms Coalition informed the membership of the task force and led to increased public engagement.
@janethaven says the task force needs direct access to the algorithms used by agencies. What's fair, accountable, and transparent means different things in different settings (e.g., criminal justice, education, housing).
🚨 If the city were just going to get generalities, there'd be no need for a task force.
Recs from Rashida Richardson of @AINowInstitute:
- this committee needs to be an oversight body because of how little engagement we've received
- our letters have specific recommendations, if the process doesn't go well, we hope this is a model for moving forward. (cont.)
- concerned that the task force is proceeding without context. Without addressing specific algorithms, you can't make meaningful recommendations.
For example, we know the city is looking at pretrial risk assessment, but the issues attached to each product are different.
@CahnLawNY: you cannot build a roadmap for the future if you don't know where you are today. The Task Force and the public need an understanding of how these systems are being used.
This is a tough subject: we need better ways of making clear how many aspects of New Yorkers' lives this touches. The Task Force hasn't taken enough steps to engage the public and make the stakes clear.
.@eric_ulrich asking about the pretrial risk assessment. Rashida says mass incarceration of Black and brown NYers can impact how a given algorithm views individual defendants as a "risk."
Rashida also says the costs of these systems can be significant. For example, public benefit algorithms can help determine who gets SNAP. There've been lawsuits in other localities, and those localities are scrambling to fix the problems. omaha.com/news/nebraska/…
.@CahnLawNY: all AI is biased; the goal is to reduce that bias. AI as a class is no different from human decision-makers.
CM Ulrich is asking which vendors the city is using. Wouldn't it be great if the Task Force could tell us?
.@bradlander is asking about using algorithms to fight the problem of reckless driving. @CahnLawNY says that a system based on camera footage raises the question of where those cameras are, and who's being surveilled. This is the issue-spotting we need agencies to consider.
We need validation or bias studies for government use of algorithms. City agencies can require these of every vendor.
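What would that look like? Here's a minimal sketch in Python, on made-up numbers, of the core question a validation/bias study answers: does the tool's error rate differ by group? (All names and figures below are invented for illustration, not from any actual vendor audit.)

from collections import defaultdict

# Each record: (group, tool flagged "high risk"?, actually reoffended?)
# Hypothetical audit data pulled from a vendor's system.
records = [
    ("group_a", True,  False), ("group_a", True,  True),
    ("group_a", False, False), ("group_a", True,  False),
    ("group_b", False, False), ("group_b", True,  True),
    ("group_b", False, False), ("group_b", False, True),
]

# False positive rate per group: flagged "high risk" among people who
# did NOT reoffend. A large gap between groups is a red flag.
flagged = defaultdict(int)
negatives = defaultdict(int)
for group, high_risk, reoffended in records:
    if not reoffended:
        negatives[group] += 1
        if high_risk:
            flagged[group] += 1

for group in sorted(negatives):
    print(f"{group}: FPR = {flagged[group] / negatives[group]:.2f}")
# group_a: FPR = 0.67  <- two of three non-reoffenders flagged
# group_b: FPR = 0.00

A real study would also check calibration, false negative rates, and data provenance, but even this simple disparity check is more than the public currently gets.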
CM Koo is asking about similar task forces. Vermont created a statewide task force, and there are comparable groups in Washington, Massachusetts, Pennsylvania, and California. Where we were once a leader, we've fallen behind other jurisdictions regarding public engagement.
.@BetaNYC raising similar concerns about lack of public engagement. Advising the task force to update the website, share more info (agendas, timelines, public events calendar). Also recommending a public glossary of terms so the public can better engage during its meetings.
Next up: @ITI_TechTweets, calling for sustained engagement across the public and private sector, including beyond the scheduled public engagement meetings.
CM Koo asks if source code should be made available. @ITI_TechTweets doesn't like that idea, thinks source code should be protected. @BetaNYC pushes back, and wants algorithms to be accountable.
Next up: two task force members airing their frustrations with not being able to analyze specific algorithms. Recs:
- if needed, Council should change the law to allow the task force to demand access to specific systems
- amend the law to give the task force more time
Above was joint testimony from Julia Stoyanovich and Solon Barocas.
Today's hearing is a reality check that meaningful transparency and accountability from government will never come easily.
Without more pressure from the public and City Council, we will have no better understanding of any specific algorithm being used by city agencies.
After the death of a local teen, grieving classmates wore lanyards, said his name, & filmed music videos. NYPD labeled them a gang.
Today, 31 organizations and academics call on the NYPD Inspector General to audit the NYPD's gang database. brennancenter.org/our-work/resea…
We believe the gang database’s vague and subjective standards make it unreliable as an investigative tool and result in ongoing discrimination against Black and Latinx New Yorkers. slate.com/technology/202…
The racial bias of the gang database is uncontested: NYPD testified that the database is 97.7% Black or Latino.
Under the guise of gang policing, the NYPD is continuing the same discriminatory policing that fueled their illegal stop-and-frisk program. theintercept.com/2019/06/28/nyp…
The basics: ALPRs use cameras and software to scan the plates of every car that passes by. They can log the time and date, GPS coordinates, and pictures of the car. Some versions can even snap pictures of a car’s occupants and create unique vehicle IDs. theintercept.com/2019/07/09/sur…
In 1 week, the LAPD scanned more than 320 mil plates. Private companies like Vigilant Solutions sell cops (and ICE) access to their private database of billions of scans, while Flock Safety sells ALPRs to paranoid homeowners and lets them share with police cnet.com/news/license-p…
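To make the stakes concrete, here's a minimal sketch of what a single scan record might look like (the field names are my assumption, not Vigilant's or Flock's actual schema):

from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class PlateScan:
    plate: str            # OCR'd plate text
    scanned_at: datetime  # time and date of the read
    lat: float            # GPS coordinates of the camera
    lon: float
    plate_image: str      # photo of the plate
    context_image: str    # some systems also capture the car and occupants
    vehicle_id: str       # vendor-assigned "vehicle fingerprint"

scan = PlateScan(
    plate="ABC1234",
    scanned_at=datetime(2020, 7, 9, 14, 30, tzinfo=timezone.utc),
    lat=34.0522, lon=-118.2437,
    plate_image="scans/abc1234_plate.jpg",
    context_image="scans/abc1234_context.jpg",
    vehicle_id="veh-8f3a",
)

Billions of rows like this, pooled across agencies and private databases, add up to a longitudinal map of where everyone drives.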
THREAD: I analyzed Citizen's contact tracing app when they were pitching it to NYC. Unsurprisingly, its approach to privacy is terrible, continues to encourage paranoia-as-a-service, and has wide latitude for law enforcement access.
This app collects A LOT of personal information, including location data, copies of gov-ID, COVID-19 diagnosis information, and undefined “health information.” They only commit to deleting Bluetooth data & gov-ID within 30 days. Nothing else is subject to any regular deletion policy.
Location data is hard to anonymize, but Citizen isn't really interested in that. They'll show you a map that makes it easy to re-identify a sick person.
This creates a dangerous opportunity for exposing people’s identities and subjecting them to online/offline harassment.
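Here's a minimal sketch of why (hypothetical coordinates and households, invented for illustration): even a pin "fuzzed" by rounding still singles people out once joined against any address-level dataset.

def fuzz(lat: float, lon: float, places: int = 3) -> tuple[float, float]:
    # Round coordinates the way a privacy-by-obscurity map might
    # (3 decimal places is roughly a city block).
    return (round(lat, places), round(lon, places))

# Hypothetical household locations (e.g., from property records).
households = {
    "Household A": (40.71281, -74.00603),
    "Household B": (40.71904, -74.00215),
    "Household C": (40.70977, -74.01114),
}

# The app publishes a fuzzed pin for a COVID-19 diagnosis...
published_pin = fuzz(40.71279, -74.00598)

# ...and anyone can fuzz the candidates the same way and look for matches.
matches = [name for name, (lat, lon) in households.items()
           if fuzz(lat, lon) == published_pin]
print(matches)  # ['Household A'] -- one pin, one household, no anonymity

Rounding only protects people in dense areas at busy times; combined with a diagnosis date and a neighborhood, it rarely protects anyone at all.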
Great piece, featuring very important points raised by leading thinkers in this space.
I would raise a few more, with a focus on the US and its marginalized communities: slate.com/technology/202…
1) Most GIFCT removals are for "glorification." That can capture a broad swath of content, incl. general sympathies with a group or debate about its grievances.
If that sounds fine, consider your own support for BLM or antifa, and our gov's attempt to label them as terrorists.
2) The closed-door involvement of the US government in the GIFCT is worrying, not comforting.
Consider the FBI's investigation of the fictional Black Identity Extremist movement, and its current interrogation of protestors for connections to antifa. theintercept.com/2020/06/04/fbi…
Twitter has policies that prohibit platform manipulation, violence, terrorism, harassment, and hateful conduct. But today's actions announce a number of ad-hoc decisions that introduce new vaguely defined terms. Why? Here's a short analysis:
There are existing rules against platform manipulation, which covers things like spam, coordinated activity, and multiple accounts. But Twitter made these removals under a new prohibition against "coordinated harmful activity." What does this term mean? What's different?
Thread on DHS' new PIA for expanding the Terrorist Screening Database to include ppl suspected of association w/"Transnational Organized Crime."
Serious concerns w/vague definitions, bad data, & wide info-sharing; Latinos are likely the most at risk. dhs.gov/sites/default/…
Last year, a federal judge ruled that the terrorist screening database violated the rights of Americans who were on the list. Rather than scale back, this PIA covers an expansion to track even more people. Many of the same concerns apply. nytimes.com/2019/09/04/us/…
The PIA acknowledges that this new category goes beyond the initial purpose of the watchlist (terrorism). But because the President said this group of people is ALSO a national security threat, it's fine? 🤷🏽‍♂️