“I’ve been frustrated for a long time about the incentive structures that we have in place and how none of them seem to be appropriate for the kind of work I want to do,” -- @timnitGebru on the founding of @DAIRInstitute
@timnitGebru @DAIRInstitute “how to make a large corporation the most amount of money possible and how do we kill more people more efficiently,” Gebru said. “Those are […] goals under which we’ve organized all of the funding for AI research. So can we actually have an alternative?” bloomberg.com/news/articles/…
“AI needs to be brought back down to earth,” said Gebru, founder of DAIR. “It has been elevated to a superhuman level that leads us to believe it is both inevitable and beyond our control. >>
When AI research, development and deployment is rooted in people and communities from the start, we can get in front of these harms and create a future that values equity and humanity.” -- @timnitGebru, on the founding of @DAIRInstitute
@timnitGebru @DAIRInstitute Thank you, @timnitGebru, for persevering and for your tireless work making this vision a reality. This is going to be amazing! (And it's going to be amazing at an appropriate, humane, livable pace.)
A few thoughts on citational practice and scams in the #ethicalAI space, inspired by something we discovered during my #ethNLP class today:
>>
Today's topic was "language variation and emergent bias", i.e. what happens when the training data isn't representative of the language varieties the system will be used with.
Week by week, we've been setting the reading questions/discussion points for the following week as we go, so that's where this week's questions came from.
"Bender notes that Microsoft’s introduction of GPT-3 fails to meet the company’s own AI ethics guidelines, which include a principle of transparency" from @jjvincent on the @verge:
@jjvincent @verge The principles are well-researched and sensible, and working with their customers to ensure compliance is a laudable goal. However, it is not clear to me how GPT-3 can be used in accordance with them.
About once a week, I get email from someone who'd like me to take the time to personally, individually, argue with them about the contents of Bender & Koller 2020. (Or, I gather, just agree that they are right and we were wrong.)
>>
I don't answer these emails. To do so would be a disservice to, at the very least, the students to whom I do owe my time, as well as my own research and my various other professional commitments.
>>
It's not that I object to people disagreeing with me! While I am committed to my own ideas, I don't have the hubris to believe I can't be wrong.
@TaliaRinger Okay, so 1st some history. There was a big statistical revolution in the 90s, coming out of earlier work on ASR & statistical MT. By about 2003, Bob Moore (of MSR) was going around with a talk gloating about how over ~10yrs ACL papers went from mostly symbolic to mostly stats.
1/
@TaliaRinger That statistical NLP work was still closely coupled with understanding the shape of the problem being solved, specifically in feature engineering. Then (2010s) we got the next "invasion" from ML land (deep learning) where the idea was the computer would learn the features!
2/
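To make that contrast concrete, here is a minimal sketch, assuming toy PyTorch components; the hand-picked features and the tiny classifier are invented for illustration.

```python
import torch
import torch.nn as nn

# 1990s/2000s statistical NLP: the researcher encodes knowledge of the
# problem by hand, as features fed into a (log-)linear model.
def hand_engineered_features(sentence: str) -> list[float]:
    words = sentence.lower().split()
    return [
        float(any(w.endswith("ed") for w in words)),  # crude past-tense cue
        float("not" in words),                        # crude negation cue
        float(len(words)),                            # sentence length
    ]

# 2010s deep learning: no hand-built features; an embedding layer and
# the layers above it learn representations from raw token ids.
class TinyClassifier(nn.Module):
    def __init__(self, vocab_size: int, dim: int = 16):
        super().__init__()
        self.emb = nn.EmbeddingBag(vocab_size, dim)  # learned "features"
        self.out = nn.Linear(dim, 2)

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        # token_ids: (batch, seq_len) integer tensor
        return self.out(self.emb(token_ids))
```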
@TaliaRinger Aside: As a (computational) linguist who saw both of these waves (though I really joined when the first was fully here), it was fun, in a way, to watch the established stats NLP folks be grumpy about the DL newcomers.
3/
Talking with students & others the past few days has brought some clarity to the ways in which the LLMs & associated overpromises suck the oxygen out of the room for all other kinds of research.
1/
(To be super clear: the conversations I was having with students about this were of the form of "how do I navigate wanting to work on X and get it published, when the whole field seems to expect that I must use LLMs?")
2/
We seem to be in a situation where people building & promoting LLMs are vastly overclaiming what they can do:
"This understands natural language!"
"This can do open-ended conversation!"
3/
This whole interview is so incredibly cringe! On top of completely evading the issue (as @timnitGebru points out), the views of both employees and users painted here are frankly awful. 1/n
First, a "fun subject" -- really? Even if that was meant somewhat sarcastically, at *best* it belittles the real harm done to @timnitGebru , @mmitchell_ai (not to mention @RealAbril and other mistreated Google employees).
But then check out Raghavan's reply. What does "famously open culture where people can be expressive" have to do with this story? What were they fired for, if not for being "expressive"?