They’re increasingly being driven around the bend by his refusal to perform an elaborate apology, or even admit error. It’s pretty awesome to watch. One of the guys trying to justice him into submission is an AI higher-up at Chase bank lol.
As I predicted, the place they're really pushing back hard is on his claim that the problems are fixable. This claim is a direct threat to the entire AI ethics project, so it can't stand. They'll keep at him over it.
Here’s what I’m seeing in his responses, though: he’s smart enough to see that the larger points re: societal bias are obvious, & he has enough intersectional armor that he doesn’t have to pretend they’re saying something novel/deep/complex/technical. Greatness.
This is making the weekend of everyone who has been bullied and dogpiled on here for making similar points. Carr’s woke creds are impeccable, he can read the linked papers & understand them, he can spot weak work, & he’s just not playing along. Nobody else has this combo.
There are all kinds of shenanigans going on in his replies & in the field that rely on ppl falling into one of three camps: 1) not sophisticated enough to spot the bait-&-switch, or 2) aligned & therefore unwilling to call it out, or 3) terrified into silence.
I'm going to risk a block here by taking Chase guy's responses to his thread & showing how the game works. The point that society is biased is obvious, Carr says it's obvious, & it's also not the kind of thing (statistical bias) he's talking about. Next!
LOL no. All of the attempts to demonstrate that the algos themselves introduce racism — all of them, without exception — reduce to: algo amplifies signal w/ more representation in the data, & suppresses signal w/ less representation. That's all of it.

Anyone who takes the time to read the papers cited knows this. I go into it in detail in my opening newsletter post. Models amplify & suppress, as do maps, sculptures, writing, etc. There's now a cottage industry in mapping this point to different domains…
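To make that amplify/suppress point concrete, here's a toy sketch (mine, not from the thread or any cited paper, with made-up numbers): the crudest possible model, one that always predicts the most common training label, turns a 70/30 skew in its data into a 100/0 skew in its output.

```python
from collections import Counter

# Hypothetical training labels: 70% "A", 30% "B"
train = ["A"] * 70 + ["B"] * 30

def fit_majority_model(labels):
    """The crudest model: always predict the most common training label."""
    return Counter(labels).most_common(1)[0][0]

model = fit_majority_model(train)
preds = [model for _ in range(100)]  # predictions on 100 new inputs

share_in_data = train.count("A") / len(train)   # 0.70
share_in_preds = preds.count("A") / len(preds)  # 1.00

# The over-represented signal is amplified (0.70 -> 1.00);
# the under-represented one is suppressed (0.30 -> 0.00).
```

Real models are less extreme than this degenerate one, but the mechanism is the same one the cited papers describe: skew in, more skew out.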
This response is just gold. It's both A) clearly not what Carr meant by "obvious," and B) runs afoul of the fact that this same crowd also argues that it's problematic to label ppl's identity characteristics absent info re: their self-ID.
Finally, everyone knows this. Carr surely knows it. It's unfortunate that you can get clout on Twitter & in "AI ethics" generally by repeating it as if it's some devastating & novel insight that few are aware of.
Anyway, this is exactly what Chen is doing here from his opening tweet, & Carr just calls him on it directly. You just never see this from someone with Carr's status. Amazing. Inject it straight into my veins.
WTH man I am now a huge fan of Kareem Carr. We don't actually agree on anything that matters EXCEPT for how to have an argument with other people. But I think that is enough. Actually, maybe that's everything.
Chase guy has now protected his account. That's ok, I have the tweets here. Here are the tweets of his I'm replying to, in the order I'm replying.
This is great. This is exactly how this entire game works.
Ok, the gloves are coming off. Now the same crowd that came for LeCun is openly coming for Carr. As I said, they will not let "it's fixable" stand. As long as Carr stays focused on the substance & on specifics, they can't touch him. It's all a bunch of elaborate derailing tactics.
I had planned to work on something totally different Monday for the next newsletter, but I think I should do a deep dive on how the bias argument is working in the AI ethics field. Because there's a bunch of derailing & misdirection happening.

• • •


More from @jonst0kes

28 Mar
At some point we'll all have to reckon w/ the following issues /in combination/:

1. Society has problems
2. ML amplifies some of them
3. #2 will get worse, making #1 worse
4. Something must be done
5. The ppl yelling the loudest about 1-4 are an obstacle to #4.
We need an entirely separate, competing conversation about fairness, accountability, & transparency in ML that takes as its premise that fixes for #2 are possible & desirable independently of efforts to fix #1, and then proceeds from there.
Not only are the FAccT clique openly opposed to efforts to look for technical fixes, but they're trying to rule out discussion of technical fixes as out-of-bounds entirely.
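For what it's worth, here's a minimal sketch of one standard member of that "technical fixes" family (my example, with made-up numbers, not anything proposed in the thread): reweight training examples inversely to label frequency, so each label carries equal total weight and the model gains nothing by collapsing onto the majority signal.

```python
from collections import Counter

train = ["A"] * 70 + ["B"] * 30  # hypothetical imbalanced labels

# Weight each example inversely to its label's frequency so that
# every label contributes the same total mass to the training loss.
counts = Counter(train)
n, k = len(train), len(counts)
weights = {label: n / (k * c) for label, c in counts.items()}

total_mass = {label: weights[label] * c for label, c in counts.items()}
# Each label now carries a total mass of 50.0 out of 100, i.e. the
# loss no longer rewards amplifying the over-represented signal.
```

This is the kind of narrow, data-level intervention that's possible independently of fixing society; whether it's sufficient is exactly the argument being had.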
27 Mar
*spits out coffee* This is exactly what I said would happen, how I said it would happen. 😂
Notice the difference between how she handles Carr & how they have handled others in the past who've stepped on this same landmine. Anyway, Carr will apologize for this thread.
They hate fixes so much! Proposing a fix that is not "totally reform all of society" is what gets you targeted for criticism or abuse.
27 Mar
This thread is an interesting artifact, b/c Carr is one of the main figures in "2+2=5" discourse, yet this thread contains exactly & precisely the argument that gets people like @ylecun dragged by the AI ethics brigade.
LeCun even tweeted this thread approvingly, which is no surprise since it backs what was essentially his argument in the big Twitter dustup with @timnitGebru.

Prediction: Carr will have to essentially retract this thread. It won't stand, as-is.
From the perspective of the AI ethics people, this thread contains wrongthink. If LeCun himself had tweeted this identical thread, he'd right now be subject to a massive pile-on led by a handful of big accounts.

So I think Carr will get pressure to do a "clarification" at least.
26 Mar
We're all going to be thinking about this question, which I start to touch on in the post, a LOT more in the years to come.

You can't answer it though w/out 1st having an answer for the what/how of /human/ moral responsibility for human artistic output.

Spoiler: views differ.
If you don't have a working theory of human art — what it is, and of the artist's relationship to the representations they put out into the world — then good luck coming up with something comparable for machines. And there is no grand unified theory of such we can all agree on.
So I think we will not answer this question for machines, because there are too many frameworks for answering it for humans. Rather, we'll just have machine-produced cultural objects alongside human ones, & we just won't agree on how to relate to their shortcomings.
25 Mar
After a walk, I have rethought my reaction to this paper.
Reading it was what prompted my "maybe I'll give up & become a troll" tweet. But I'm now encouraged by it.
First, the depressing aspect: a whole section of this paper traffics in ancient techbro stereotypes, plus one new one they've invented themselves. These stereotypes are snidely offered up unsupported, as if we ALL KNOW AMIRITE? And this is an ACM paper!
So I'm like, this sneering collection of techbro tropes was published by the ACM, which means the citadel has fallen. It's all over. Stick a fork in American tech leadership. But then I thought about their "Ethics Unicorn" archetype, & it hit me: they're eating their own.
24 Mar
So I read this paper, & there's an entire subsection dedicated to a supposed new (problematic) type of figure — the Ethics Unicorn. But no example of such a person, or even such thinking, is given. I have never encountered this. It seems like folklore.
I mean, maybe I am not in trendy enough tech circles? Maybe some of the engineers at the NYT who have a lot of opinions about what the edit staff should & shouldn't be publishing would fall into this category?
At any rate, the lone citation there is to an explainer on the "full-stack unicorn developer." This Ethics Unicorn character is left to the imagination, I guess.