Better late than never, I suppose, but as one of the targets of his harassment, I could have wished that you hadn't emboldened him and his like in the first place, nor sat by for two months while this went on.
And it's not just about "people who don't want to be contacted" FWIW. He's been spamming all kinds of folks with derogatory remarks about @timnitGebru, me, and others who stand up for us.
@timnitGebru I should have kept a tally of how much time I've spent over the last two months dealing with this crap because it pleased @JeffDean to say in a public post that our paper "didn't meet the bar" for publication.
@timnitGebru @JeffDean The whole point of peer review is to determine what does and doesn't merit publication. We'll be presenting the paper at #FAccT2021 next week, TYVM. We didn't need additional "reviews" from the ML fanboys and trolls you sent our way.
@timnitGebru He sent it to me this morning too. I just archived it without reading it, because I figured there would be nothing of value there. (I wasn't wrong.) What is it with this guy? As you say, @sibinmohan, why does he think Google needs his defense?
@timnitGebru @sibinmohan And more to the point: @timnitGebru and @mmitchell_ai (and @RealAbril and others) your shedding light on this and continuing to do so brings great value. How can we get to better corporate (& other) practices if the injustices are not widely known?
1. Process: The camera ready is done, and approved by all of the authors. If I make any changes past this point it will be literally only fixing typos/citations. No changes to content let alone the title.
2. Content: I stand by the title and the question we are asking. The question is motivated by the fact that the field has been dominated by "bigger bigger bigger!" (yes, in terms of both training data and model size), with most* of the discourse only fawning over the results. >>
First, some guesses about system components, based on current tech: it will include a very large language model (akin to GPT-3) trained on huge amounts of web text, including Reddit and the like.
It will also likely be trained on sample input/output pairs, with crowdworkers asked to create the bulleted summaries for news articles.
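For readers who want to picture what that guess looks like in practice, here's a minimal sketch of such a pipeline: a large pretrained model fine-tuned on (article, bulleted summary) pairs. Everything in it is illustrative. The t5-large checkpoint stands in for whatever large model is actually used (the thread's guess is something GPT-3-like), and the "article"/"summary" field names and hyperparameters are assumptions, not details of any real system.

```python
# A minimal sketch, under assumptions, of fine-tuning a large pretrained
# model on crowdworker-written (article, bulleted summary) pairs.
# The checkpoint, field names, and settings are illustrative only.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer, Trainer, TrainingArguments

tokenizer = AutoTokenizer.from_pretrained("t5-large")  # assumed base model
model = AutoModelForSeq2SeqLM.from_pretrained("t5-large")

def preprocess(example):
    # "article" and "summary" are hypothetical field names for news articles
    # paired with crowdworker-written bulleted summaries.
    inputs = tokenizer(example["article"], truncation=True, max_length=512)
    targets = tokenizer(example["summary"], truncation=True, max_length=128)
    inputs["labels"] = targets["input_ids"]
    return inputs

# train_dataset is assumed to be a datasets.Dataset of such pairs, e.g.:
# trainer = Trainer(
#     model=model,
#     args=TrainingArguments(output_dir="summarizer", num_train_epochs=3),
#     train_dataset=train_dataset.map(preprocess),
# )
# trainer.train()
```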
To all of those who are "both-sides-ing" this: we see you. It takes courage, guts, and fortitude to speak out in the face of oppression, knowing that no matter how gently you make the point, people will think you're "too angry".
I'm glad that #UWAllen publicly disavowed Domingos's meltdown. It's disheartening to see so many folks reacting to that with: "Well, what about @AnimaAnandkumar?"
Just how angry is the right amount of angry, when faced with racism, misogyny, misogynoir, gaslighting, etc? Furious is the right amount.
"Aside from turning the paper viral, the incident offered a shocking indication of how little Google can tolerate even mild pushback, and how easily it can shed all pretense of scientific independence." Thank you, @mathbabedotorg
@mathbabedotorg Re: "Embarrassing as this episode should be for Google — the company’s CEO has apologized — I’m hoping policy makers grasp the larger lesson."
Totally agree on the main pts (about policy makers and about it being embarrassing). It doesn't seem to me that he actually apologized.
I’ve picked up a bunch of new followers in the past few days, and I suspect many of you are here because you’re interested in what I might have to say about Google and Dr. @TimnitGebru. So, here’s what I have to say:
Dr. @TimnitGebru is a truly inspiring scholar and leader. Working with her these past few months has been an absolute highlight for me: