I have a piece in @nature today on the urgent need to regulate emotion recognition tech. During the pandemic, this tech has been pushed further into schools and workplaces. We should reject the phrenological impulse, where unverified systems are used to interpret inner states.
This tweet was 5 years in the making: #AtlasofAI is out TODAY!
It’s a book on the politics and planetary costs of artificial intelligence as an extractive industry — consuming natural resources, labor, and vast quantities of data. 👉 bookshop.org/books/atlas-of… @yalepress (1/6)
To get a sense of the book, here’s an extract from the EARTH chapter, and there’s another coming shortly from the AFFECT chapter in @TheAtlantic: bit.ly/3cR3UIi (2/6)
Words cannot convey how bad this will be. Trump's HUD is moving to weaken the Fair Housing Act — making it much harder to sue for discrimination, and nearly *impossible* if algorithms are involved. It is riddled with wrongness.
@VaroonMathur Here's how outrageous the loopholes are: "A bank that rejected every loan application filed by African Americans and approved every one filed by white people would need to prove only that race or a proxy for it was not used directly in constructing its computer model." Wow, no.
@VaroonMathur But there's more. "A business could 'defeat' a claim by saying it had vetted its algorithm with a neutral third party. If an algorithm that resulted in discrimination were developed by... a tech company, a bank that used it could not be held accountable for the result." Again, no.
Our biggest-ever report tackles the issues of AI and accountability, after a hell of a year for the tech sector. Read the report and see our 10 recommendations: ainowinstitute.org/AI_Now_2018_Re…
We're calling for regulation of AI, in particular facial and affect recognition.
And - this is a BIG one - all tech vendors should waive the corporate secrecy claims that are preventing accountability and due process in the public sector. Black box systems have no place in government.
Whoa... looks like our Anatomy of AI research spawned some viral news. A bunch of stories focused on the ‘worker cage’ patent. So here’s some context:
The Amazon cage thing is not new. The patent was filed in 2013, granted in ‘16. We dug it up in our research on all the patents that went into the Amazon Echo. You can search them too – USPTO has 'em on a public site. Here’s the cage patent: pdfpiw.uspto.gov/.piw?Docid=092…
The worker cage never got made. Instead, Amazon execs say they decided to make employees wear vests covered in sensors so they don’t get run over by robots. Ohhhhkay...