I’ve picked up a bunch of new followers in the past few days, and I suspect many of you are here because you’re interested in what I might have to say about Google and Dr. @TimnitGebru. So, here’s what I have to say:
Dr. @TimnitGebru is a truly inspiring scholar and leader. Working with her these past few months has been an absolute highlight for me:
I expected it to be awesome working with Timnit, and I was right!
The paper started as a conversation between the two of us but expanded to include a larger group, representing different research traditions, to provide a multi-viewpoint investigation of the possible risks of large language models.
Google used the paper as a pretext, but it’s hard to see how it’s really about the paper. (If their goal was to squash this paper or at least not associate it with Google, well…)
A best-case scenario outcome then is that this story, Timnit’s bravery and clarity in speaking out, and the attention it is getting will serve as a catalyst.
To the vast majority of my followers who aren’t Black women: The next time a Black woman at your workplace tells you something you don’t want to hear, are you ready to listen?
A secondary story here (and the one I’m most often asked about) is what Google’s attempt to squash the paper means for AI research and especially AI ethics research, given that so much of it takes place in industry.
My take is that moving to a future where the tech we build addresses and uproots systems of oppression (instead of replicating or further entrenching them) is an all-hands-on-deck task:
We need researchers like Dr. Gebru and the team she has built at Google working in an environment that gives them visibility into the corporate processes, but with the freedom to turn a critical lens wherever it is needed.
We need people engaged in product development who both have the skills to envision possible adverse impacts and are empowered by their corporations to bring that input to development teams. See @RadicalAIPod episode w/ @baxterkb for a great exploration:
We need people in the academy engaged in this research free from corporate incentives but also training the next generation of technologists to understand their work in its social context.
And finally, we need people who know how to design effective regulation, informed by those who understand how the tech works and how it impacts people, AND people educating the general public on what to advocate for.
If brilliant scholars like Dr. @TimnitGebru have to put all their energy into fighting workplace discrimination, it’s going to take so much longer to get to where we need to be.
Google using our paper as an excuse to fire Dr. Gebru is a setback, for sure: at the very least it inhibits her team’s ability to pursue their excellent work.
The best-case outcome I can imagine on this point is Google taking concrete steps to support the scholars on Dr. Gebru’s team and setting up MUCH clearer processes around the approval of publications, which make clear that it’s only about checking for disclosure of IP.
But I don’t see how they can possibly do that without owning the fact that it wasn’t the paper, admitting that she didn’t resign, and coming clean on the actual reason they fired her. Which, also, would be part of my best-case outcome.
Is anyone else on the West Coast already up and surprised to see an email from #WeCNLP2020 saying the event starts in "59 minutes" when the schedule says 8am start?
Seems like an auto-generated message from the online platform: the site opens at 7, though the program doesn't start until 8.
Given that this one is WEST COAST NLP and for once is actually in our timezone, it would be nice to not be harassed by emails making us feel late...
I was feeling a little ranty this morning, but there are actually also some interesting points about context and pragmatics here, for when we write (or cause machines to write) text that will be interpreted in contexts we aren't directly participating in:
Surely, from the platform's point of view, WeCNLP is starting at 7am. For them, WeCNLP refers to an event with "start" and "end" times they have to program into their platform, so that people who have registered can access the platform during those times. >>
But for people *attending* WeCNLP (the addressees of that email), WeCNLP refers to an event with a specific internal schedule, of talks and informal meeting times. >>
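To make the pragmatics point concrete, here's a minimal sketch of how such a reminder could be generated (purely illustrative: the date, times, and names are my own assumptions, not anything I know about the platform's actual code). The platform computes "starts in N minutes" from the access window it was configured with, while attendees read "starts" against the published program:

```python
from datetime import datetime, timedelta, timezone

PACIFIC = timezone(timedelta(hours=-7))  # PDT (assumed)

# What the platform was told: the access window to program in.
PLATFORM_OPENS = datetime(2020, 10, 9, 7, 0, tzinfo=PACIFIC)   # hypothetical date
# What attendees care about: the first talk on the published schedule.
PROGRAM_STARTS = datetime(2020, 10, 9, 8, 0, tzinfo=PACIFIC)

def platform_reminder(now: datetime) -> str:
    """The platform's 'starts in N minutes' is relative to ITS start time."""
    minutes = int((PLATFORM_OPENS - now).total_seconds() // 60)
    return f"WeCNLP starts in {minutes} minutes!"

now = PLATFORM_OPENS - timedelta(minutes=59)
print(platform_reminder(now))   # -> "WeCNLP starts in 59 minutes!"
print(PROGRAM_STARTS - now)     # -> 1:59:00, which is how the addressees read "starts"
```

Same word "starts", two different referents: the platform's configured opening time, and the schedule the attendees are oriented to.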
The authors take a deep look at a use case for text that is ungrounded in either the world or any commitment to what's being communicated, but is nonetheless fluent, apparently coherent, and of a specified style. You know, exactly #GPT3's specialty.
2/n
What's that use case? The kind of text needed, and apparently needed in quantity, for discussion boards whose purpose is recruitment and entrenchment in extremist ideologies.
3/n
@robiupui I totally get that this is the most difficult option in many ways, but also I might be able to help because I have been teaching in this format since 2007, in order to accommodate both distance & local students in the MS program that I run. >>
1 Instructor audible to remote & local students
2 Slides visible to remote & local students
3 Instructor gestures visible to remote & local students
4 All students can ask questions/answer questions/comment
5 All students can hear student contributions
6 Remote students can turn in homework/take exams
7 Remote students can connect with out of class help (office hours, bulletin boards)
8 Remote students can collaborate with others (local or not).
6-8 are easy though and I imagine not what you're asking about. >>
For slightly tedious reasons, I've ended up with the homework assignment of finding scholarly literature analyzing how so-called IQ tests are biased. I've done some digging and that literature is a *mess*, so I'm reaching out here in case anyone has appropriate cites to hand. >>
From the fact that IQ tests are one tool used to push Black children in the US into special ed classrooms [1] and the fact that the whole history of IQ testing is bound up with eugenics [2] >>
... the obvious conclusion is that population-level differences in IQ scores are due in large part to the IQ test itself either being a situation that is more familiar/normative for white kids or involving questions that draw on white culture (while pretending to be neutral). >>
Tonight I briefly got tangled in a rather pointless Twitter argument about #AIethics (pointless because the other party isn’t interested in discussing in good faith). One point of clarity has come out of it, though, in the responses of a few of us, and I want to pull it out here:
Ethical considerations are not a separate concern from whether the AI research is “sound”: sound AI research requires not only valid math but also sensible task design.
A lot of the ethically questionable things we’re seeing (predicting faces from voices, predicting “intellectual ability” from short text responses, etc) are cases where it doesn’t make sense to predict A from B, by any model.