This thread is an interesting artifact, b/c Carr is one of the main figures in the "2+2=5" discourse, yet it contains precisely the argument that gets people like @ylecun dragged by the AI ethics brigade.
LeCun even tweeted this thread approvingly, which is no surprise since it backs what was essentially his argument in the big Twitter dustup with @timnitGebru.

Prediction: Carr will have to essentially retract this thread. It won't stand, as-is.
From the perspective of the AI ethics people, this thread contains wrongthink. If LeCun himself had tweeted this identical thread, he'd right now be subject to a massive pile-on led by a handful of big accounts.

So I think Carr will get pressure to do a "clarification," at least.
Carr's is a good thread, BTW. I had him blocked b/c I truly despise the whole "2+2=5" discourse, which is deeply silly, to the point that I never weighed in on it, because IMO it's the Twitter equivalent of eating a Tide pod: even caring about such a stunt == you're Too Online.
I should note that Carr's point #2 isn't well stated. The models are definitely biased. They are ALWAYS biased, as is any representation of a thing in the world. This is true fundamentally. Here's a bunch of words on that exact topic:…
So I think #2 will be the main point of attack from the ethics people, because as phrased it 1) is kinda not correct, & 2) is exactly the claim that'll get you brigaded if you make it as an ML person w/ the wrong profile picture.
The point of attack on Carr's thread will be point #2, but the real target will be point #4. It is anathema to say that there is a fix for bias problems. The "fix" is supposed to be a massive overhaul of the racial & gender makeup of the entire ML field, & similar for society.
The entire edifice of "AI ethics" as it currently exists rests upon the premise that Carr's #4 is false. It is not fixable, at least not via technical mechanisms or tweaking. The only fix is supposed to be revolution.
Remember that whole thing where the students made nail polish that, if discreetly dipped in a drink, would change color in the presence of roofie drugs, but they were attacked as enabling "rape culture"?

Same pattern. Fix proposals are anathema to ppl who want to end the system.

• • •

More from @jonst0kes

28 Mar
At some point we'll all have to reckon w/ the following issues /in combination/:

1. Society has problems
2. ML amplifies some of them
3. #2 will get worse, making #1 worse
4. Something must be done
5. The ppl yelling the loudest about 1-4 are an obstacle to #4.
We need an entirely separate, competing conversation about fairness, accountability, & transparency in ML that takes as its premise that fixes for #2 are possible & desirable independently of efforts to fix #1, and then proceeds from there.
Not only are the FAccT clique openly opposed to efforts to look for technical fixes, but they're trying to rule out discussion of technical fixes as out-of-bounds entirely.
28 Mar
They’re increasingly being driven around the bend by his refusal to perform an elaborate apology, or even admit error. It’s pretty awesome to watch. One of the guys trying to justice him into submission is an AI higher-up at Chase bank lol.
As I predicted, where they’re really pushing back hard is on his claim that the problems are fixable. This claim is a direct threat to the entire AI ethics project, so it can’t stand. They’ll keep at him over it.
Here’s what I’m seeing in his responses, though: he’s smart enough to see that the larger points re: societal bias are obvious, & he has enough intersectional armor that he doesn’t have to pretend they’re saying something novel/deep/complex/technical. Greatness.
27 Mar
*spits out coffee* This is exactly what I said would happen, how I said it would happen. 😂
Notice the difference between how she handles Carr & how they have handled others in the past who've stepped on this same landmine. Anyway, Carr will apologize for this thread.
They hate fixes so much! Proposing a fix that is not "totally reform all of society" is what gets you targeted for criticism or abuse.
26 Mar
We're all going to be thinking about this question, which I start to touch on in the post, a LOT more in the years to come.

You can't answer it though w/out 1st having an answer for the what/how of /human/ moral responsibility for human artistic output.

Spoiler: views differ.
If you don't have a working theory of human art — what it is, and of the artist's relationship to the representations they put out into the world — then good luck coming up with something comparable for machines. And there is no grand unified theory of such we can all agree on.
So I think we will not answer this question for machines, because there are too many frameworks for answering it for humans. Rather, we'll just have machine-produced cultural objects alongside human ones, & we just won't agree on how to relate to their shortcomings.
25 Mar
After a walk, I have rethought my reaction to this paper.
Reading it was what prompted my "maybe I'll give up & become a troll" tweet. But I'm now encouraged by it.
First, the depressing aspect: a whole section of this paper traffics in ancient techbro stereotypes, plus one new one they've invented themselves. These stereotypes are snidely offered up unsupported, as if we ALL KNOW, AMIRITE? And this is an ACM paper!
So I'm like, this sneering collection of techbro tropes was published by the ACM, which means the citadel has fallen. It's all over. Stick a fork in American tech leadership. But then I thought about their "Ethics Unicorn" archetype, & it hit me: they're eating their own.
24 Mar
So I read this paper, & there's an entire subsection dedicated to a supposed new (problematic) type of figure — the Ethics Unicorn. But no example of such a person, or even such thinking, is given. I have never encountered this. It seems like folklore.
I mean, maybe I am not in trendy enough tech circles? Maybe some of the engineers at the NYT who have a lot of opinions about what the edit staff should & shouldn't be publishing would fall into this category?
At any rate, the lone citation there is to an explainer on the "full-stack unicorn developer." This Ethics Unicorn character is left to the imagination, I guess.
