"Bender notes that Microsoft’s introduction of GPT-3 fails to meet the company’s own AI ethics guidelines, which include a principle of transparency" from @jjvincent on the @verge:
@jjvincent @verge The principles are well researched and sensible, and working with their customers to ensure compliance is a laudable goal. However, it is not clear to me how GPT-3 can be used in accordance with them.
>>
Transparency, for example, would require detailed information about GPT-3's training data, for both customers and end users.
>>
Transparency, by their (good) definition, also includes being "open about the limitations of their systems".
And yet, the press release repeats hype and overclaims about what GPT-3 does (1/3):
"users have discovered countless things that these AI models can do with their powerful and comprehensive understanding of language."
>>
And yet, the press release repeats hype and overclaims about what GPT-3 does (2/3):
"GPT-3’s capability to generate original content and its understanding of what’s happening in the game"
>>
And yet, the press release repeats hype and overclaims about what GPT-3 does (3/3):
"GPT-3 is part of a new class of models that can be customized to handle a wide variety of use cases that require a deep understanding of language"
>>
So right there in the announcement, while pointing to their principles, they're violating them. I admire the work of the folks on the FATE team at Microsoft, but I'm afraid not enough folks are actually listening to them.
About once a week, I get email from someone who'd like me to take the time to personally, individually, argue with them about the contents of Bender & Koller 2020. (Or, I gather, just agree that they are right and we were wrong.)
>>
I don't answer these emails. To do so would be a disservice to, at the very least, the students to whom I do owe my time, as well as my own research and my various other professional commitments.
>>
It's not that I object to people disagreeing with me! While I am committed to my own ideas, I don't have the hubris to believe I can't be wrong.
@TaliaRinger Okay, so 1st some history. There was a big statistical revolution in the 90s, coming out of earlier work on ASR & statistical MT. By about 2003, Bob Moore (of MSR) was going around with a talk gloating about how over ~10yrs ACL papers went from mostly symbolic to mostly stats.
1/
@TaliaRinger That statistical NLP work was still closely coupled with understanding the shape of the problem being solved, specifically in feature engineering. Then (2010s) we got the next "invasion" from ML land (deep learning) where the idea was the computer would learn the features!
2/
@TaliaRinger Aside: As a (computational) linguist who saw both of these waves (though I really joined when the first was fully here), it was fun, in a way, to watch the established stats NLP folks be grumpy about the DL newcomers.
3/
Talking with students & others the past few days has brought some clarity to the ways in which the LLMs & associated overpromises suck the oxygen out of the room for all other kinds of research.
1/
(To be super clear: the conversations I was having with students about this were of the form of "how do I navigate wanting to work on X and get it published, when the whole field seems to expect that I must use LLMs?")
2/
We seem to be in a situation where people building & promoting LLMs are vastly overclaiming what they can do:
"This understands natural language!"
"This can do open-ended conversation!"
3/
This whole interview is so incredibly cringe! On top of completely evading the issue as @timnitGebru points out, the views of both employees and users painted here are frankly awful. 1/n
First, a "fun subject" -- really? Even if that was meant somewhat sarcastically, at *best* it belittles the real harm done to @timnitGebru , @mmitchell_ai (not to mention @RealAbril and other mistreated Google employees).
But then check out Raghavan's reply. What does "famously open culture where people can be expressive" have to do with this story? What were they fired for, if not for being "expressive"?
Currently reading the latest NSF/Amazon call (NSF 21-585) which goes out of its way to say interdisciplinary perspectives are crucial and that "this program supports the conduct of fundamental computer science research".
So, is interdisciplinary work on "AI" actually "fundamental computer science research"?
"The lead PI on each proposal must bring computer science expertise to the research. Computationally focused research efforts informed by socio-technical and social behavioral needs of the field are broadly encouraged." Was this thing written by a committee?
Wow this article covers a lot of ground! Seems like a good way for folks interested in "AI ethics" and what that means currently to get a quick overview.
"Ethics in AI is essentially questioning, constantly investigating, and never taking for granted the technologies that are being rapidly imposed upon human life.
That questioning is made all the more urgent because of scale."