The next few years will see wide-ranging regulation of biometrics. Our new compendium highlights lessons learned from the last decade of regulatory attempts. This summary by @ambaonadventure ainowinstitute.org/regulatingbiom… captures themes & questions for the future of biometric tech:
How should biometric data be defined? Do emotion recognition systems fall within the definition of biometrics in the law?
Why have data protection laws like the #GDPR failed to curb the expansion of biometric surveillance infrastructure by governments across the world? What are the limits of this approach?
How should regulation address concerns about inaccuracy and discrimination in biometric systems?
What could be the fallout of a “risk based” regulatory approach to different use cases for biometrics (e.g. “identification” vs. “verification”)?
What does due process for facial recognition look like in the criminal justice system? Should law enforcement have access to biometric systems to begin with?
How can the “conditions” for lifting moratorium laws be strengthened to ensure that eventual legislative or deliberative processes are robust and democratic?
What regulatory tools could create public transparency around the development, purchase, and use of biometric systems?
What role can community-led advocacy play in shaping the priorities and impact of biometric regulation?
Our key findings? In a global climate where austerity is the norm in many sectors, AI has nonetheless garnered rich govt investment. This is often justified by high-level claims of benefit across various domains, from healthcare, to climate, to education, and beyond. 2/6
We can’t have a conversation about the public interest case for AI without confronting tradeoffs: there is little to no evidence supporting the assumption that investing in AI in education, for example, will be better for students than funding school lunches or after-school care. 3/6
The FDA offers lessons in designing regulation that compels the production of useful information about AI, given the structural opacity of its development and deployment.
For more read @akapczynski on the FDA's role in info production:
@akapczynski Lack of consensus on what counts as efficacy is a powerful entry point for regulating AI.
There will always be potential harms; we must consider whether benefits outweigh harms. To know this, we need clearer, more specific insight into how AI systems work & who they benefit.
In response to the @Europarl_EN request, @theodorajewell tackles the question: "How can AI be deployed to benefit society, advance research, and accelerate our climate transition?" –– Here are some takeaways (1/7):
We must ask questions: Who is developing this technology, for whom, for what, and with what data? Who profits? And then as a corollary: Who holds sovereignty and ownership rights to the data; what is at stake, and for whom, in the resulting applications? (2/7)
We must clarify the terms — what, exactly, is meant by artificial intelligence? Promises made that AI will ‘solve’ the climate crisis are in direct opposition to the role AI plays in perpetuating climate injustice. (3/7)
Many algorithmic systems continue to exhibit bias and errors, yet governments still use them to make life-changing decisions. Our new report with @RaceNYU looks at those taking the fight to court—what’s working, what’s not, and where we need to focus next: ainowinstitute.org/litigatingalgo…
@RaceNYU Recommendation 1: Assess the success of litigation by measuring structural change within government agencies and their programs, rather than through isolated or narrow changes to specific ADS.
Recommendation 2: Consider litigation as a rallying point and megaphone to amplify the voices of those personally impacted by ADS.
We’re starting off our #AINow2018 Symposium with a year in review by @katecrawford & @mer__edith, hitting on some key questions—How do we ensure ethics in tech? How do we create accountability over our tools? And how do we organize for it all?
“We’re gonna give you a tour of what happened this year” says @katecrawford, presenting a visualization of data breaches from Cambridge Analytica to Facebook, organizing against Amazon’s Rekognition & Google’s Maven, Palantir’s tools supporting ICE, & more.
@katecrawford mentions the first self-driving car fatalities, an explosion in facial recognition tech, Facebook ads violating fair housing rules…
“That’s a tiny sample of what has been a hell of a year.”