@timnitGebru @mmitchell_ai Heartened, because I am glad to see that people aren't just turning back to business-as-usual (and all credit to @timnitGebru and @mmitchell_ai for speaking out so clearly & effectively, under terrible circumstances).
@timnitGebru @mmitchell_ai Saddened because, well, see: terrible circumstances. Google had SO MANY opportunities to do the right thing. @timnitGebru, @mmitchell_ai,
and others had put so much effort into advocacy while they were still there...
On professional societies not giving academic awards to harassers, "problematic faves", or bigots, a thread: /1
Context: I was a grad student at Stanford (in linguistics) in the 1990s, but I was clueless about Ullman. I think I knew of his textbook, but didn't know whether he was still faculty, let alone where. /2
I hadn't heard about his racist web page until this week. /3
Better late than never, I suppose, but as one of the targets of his harassment, I could wish that you hadn't emboldened him and his like in the first place, nor sat by for two months while this went on.
And it's not just about "people who don't want to be contacted" FWIW. He's been spamming all kinds of folks with derogatory remarks about @timnitGebru, me, and others who stand up for us.
@timnitGebru I should have kept a tally of how much time I've spent over the last two months dealing with this crap because it pleased @JeffDean to say in a public post that our paper "didn't meet the bar" for publication.
@timnitGebru He sent it to me this morning too. I just archived it without reading it, because I figured there would be nothing of value there. (I wasn't wrong.) What is it with this guy? As you say, @sibinmohan, why does he think Google needs his defense?
@timnitGebru @sibinmohan And more to the point: @timnitGebru and @mmitchell_ai (and @RealAbril and others), your shedding light on this and continuing to do so brings great value. How can we get to better corporate (& other) practices if the injustices are not widely known?
1. Process: The camera-ready is done and approved by all of the authors. If I make any changes past this point, they will literally be only fixes to typos/citations. No changes to content, let alone the title.
2. Content: I stand by the title and the question we are asking. The question is motivated because the field has been dominated by "bigger bigger bigger!" (yes in terms of both training data and model size), with most* of the discourse only fawning over the results. >>
First, some guesses about system components, based on current tech: it will include a very large language model (akin to GPT-3) trained on huge amounts of web text, including Reddit and the like.
It will also likely be trained on sample input/output pairs, created by asking crowdworkers to write bulleted summaries for news articles.
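To make the guess above concrete, here is a minimal sketch of what that supervised data might look like: crowdworker-written (article, bulleted-summary) pairs formatted into input/output examples that a pretrained language model could then be trained on. Everything here is illustrative (the function name, prompt format, and sample pair are hypothetical, not from any actual system).

```python
# Hypothetical sketch: turn crowdworker (article, bullets) pairs into
# input/output training examples for a pretrained language model.
# The prompt format and data are invented for illustration only.

def make_training_example(article: str, bullets: list[str]) -> dict:
    """Pair a news article with its crowdworker-written bulleted summary."""
    target = "\n".join(f"- {b}" for b in bullets)
    return {"input": f"Summarize:\n{article}", "output": target}

# A made-up example pair, standing in for crowdworker-collected data.
pairs = [
    ("City council voted 5-2 to approve the new transit budget.",
     ["Council approved the transit budget", "Vote was 5-2"]),
]

dataset = [make_training_example(a, b) for a, b in pairs]
print(dataset[0]["output"])
# → - Council approved the transit budget
#   - Vote was 5-2
```

The point of the sketch is just the data shape: the web-text pretraining gives the model general language ability, and pairs like these would supply the task-specific supervision.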