At some point we'll all have to reckon w/ the following issues /in combination/:

1. Society has problems
2. ML amplifies some of them
3. #2 will get worse, making #1 worse
4. Something must be done
5. The people yelling the loudest about 1-4 are an obstacle to #4.
We need an entirely separate, competing conversation about fairness, accountability, & transparency in ML that takes as its premise that fixes for #2 are possible & desirable independently of efforts to fix #1, and then proceeds from there.
Not only are the FAccT clique openly opposed to efforts to look for technical fixes, but they're trying to rule out discussion of technical fixes as out-of-bounds entirely.
The only context in which they'll permit technical discussions of bias mitigation, is in service of proving that it cannot be done because the real problem is that bias is everywhere & we have to reform all of society. The only permitted conclusion is revolution.
I've made this point before, but this pattern is identical to the one that shows up in the police reform discussion. The goal of police abolitionists is to mark talk of improving policing as out-of-bounds, on the grounds that it is incrementalism & not revolution.
The proper term for the folks I'm referring to in #5 above is: "accelerationists." That's the word for those who oppose fixes & incremental improvements to the system, because they want to hasten the collapse & rebuilding of the entire structure, & I should start using it.

• • •


More from @jonst0kes

28 Mar
They’re increasingly being driven around the bend by his refusal to perform an elaborate apology, or even admit error. It’s pretty awesome to watch. One of the guys trying to justice him into submission is an AI higher-up at Chase Bank, lol.
As I predicted, the place where they’re really pushing back hard is his claim that the problems are fixable. This claim is a direct threat to the entire AI ethics project, so it can’t stand. They’ll keep at him over it.
Here’s what I’m seeing in his responses, though: he’s smart enough to see that the larger points re: societal bias are obvious, & he has enough intersectional armor that he doesn’t have to pretend they’re saying something novel/deep/complex/technical. Greatness.
27 Mar
*spits out coffee* This is exactly what I said would happen, how I said it would happen. 😂
Notice the difference between how she handles Carr & how they have handled others in the past who've stepped on this same landmine. Anyway, Carr will apologize for this thread.
They hate fixes so much! Proposing a fix that is not "totally reform all of society" is what gets you targeted for criticism or abuse.
27 Mar
This thread is an interesting artifact, b/c Carr is one of the main figures in "2+2=5" discourse, yet this thread contains precisely the argument that gets people like @ylecun dragged by the AI ethics brigade.
LeCun even tweeted this thread approvingly, which is no surprise since it backs what was essentially his argument in the big Twitter dustup with @timnitGebru.

Prediction: Carr will have to essentially retract this thread. It won't stand, as-is.
From the perspective of the AI ethics people, this thread contains wrongthink. If LeCun himself had tweeted this identical thread, he'd right now be subject to a massive pile-on led by a handful of big accounts.

So I think Carr will get pressure to do a "clarification" at least
26 Mar
We're all going to be thinking about this question, which I start to touch on in the post, a LOT more in the years to come.

You can't answer it though w/out 1st having an answer for the what/how of /human/ moral responsibility for human artistic output.

Spoiler: views differ.
If you don't have a working theory of human art — what it is, and of the artist's relationship to the representations they put out into the world — then good luck coming up with something comparable for machines. And there is no grand unified theory of such we can all agree on.
So I think we will not answer this question for machines, because there are too many frameworks for answering it for humans. Rather, we'll just have machine-produced cultural objects alongside human ones, & we just won't agree on how to relate to their shortcomings.
25 Mar
After a walk, I have rethought my reaction to this paper.
Reading it was what prompted my "maybe I'll give up & become a troll" tweet. But I'm now encouraged by it.
First, the depressing aspect: a whole section of this paper traffics in ancient techbro stereotypes, plus one new one they've invented themselves. These snide stereotypes are offered up basically unsupported, as if we ALL KNOW AMIRITE? And this is an ACM paper!
So I'm like, this sneering collection of techbro tropes was published by the ACM, which means the citadel has fallen. It's all over. Stick a fork in American tech leadership. But then I thought about their "Ethics Unicorn" archetype, & it hit me: they're eating their own.
24 Mar
So I read this paper, & there's an entire subsection dedicated to a supposed new (problematic) type of figure — the Ethics Unicorn. But no example of such a person, or even such thinking, is given. I have never encountered this. It seems like folklore.
I mean, maybe I am not in trendy enough tech circles? Maybe some of the engineers at the NYT who have a lot of opinions about what the edit staff should & shouldn't be publishing would fall into this category?
At any rate, the lone citation there is to an explainer on the "full-stack unicorn developer." This Ethics Unicorn character is left to the imagination, I guess.
