A few thoughts on citational practice and scams in the #ethicalAI space, inspired by something we discovered during my #ethNLP class today:

>>
Today's topic was "language variation and emergent bias", i.e. what happens when the training data isn't representative of the language varieties the system will be used with.

The course syllabus is here, for those following along:
faculty.washington.edu/ebender/2021_5…

>>
Week by week, we've been setting our reading questions/discussion points for the following week as we go, so that's where the questions listed for this week come from.

>>
Anyway, the definition of the term "emergent bias" was highly relevant, of course, so I was pulling up Friedman & Nissenbaum 1996:

dl.acm.org/doi/abs/10.114…

>>
In parallel, one of the students did a web search for the term, and landed on this page:

technologicalethics.org/three-kinds-of…

Some of that text looked oddly familiar! But a thorough search of the site turned up exactly zero citations to Friedman & Nissenbaum 1996 (or any other citations, for that matter).

>>
We poked around the website a bit more, trying to figure out who is behind it, and could only find a first name ("Henry") as well as this helpful invitation to call ... but no phone number:

>>
[Screen cap from https://www...]

On the other hand, "Henry" and colleagues are apparently selling ethics seminars for $495-$1295, depending on the length of the seminar.

>>
And the listed partners include @MonashUni (who might prefer not to be associated with such a shady organization).

>>
Another student turned up this old tweet (from March 2020) advertising a keynote by Henry, whose last name appears to be Dobson:

>>
And from there, the Medium page, apparently by the same person:

henrydobson.medium.com

(with the tagline "tech ethicist")

I guess it's in vogue to be a "tech ethicist" these days, and of course anyone can hang out their shingle. But someone doing serious work in this space will have good citational practice, relating their work to others'.

>>
It's really easy to see that Henry Dobson is not someone to take seriously (he's flat-out plagiarized Friedman & Nissenbaum!), but even if he were presenting his own original work, the lack of citations would be a red flag.

>>
In particular, if someone working in this space (and selling services no less!) appears to be unaware of the work of:

@LatanyaSweeney @jovialjoy @rajiinio @timnitGebru @merbroussard @mer__edith @ruha9 @mmitchell_ai or @safiyanoble

... I will approach their wares with skepticism.
