About once a week, I get email from someone who'd like me to take the time to personally, individually, argue with them about the contents of Bender & Koller 2020. (Or, I gather, just agree that they are right and we were wrong.)

>>
I don't answer these emails. To do so would be a disservice to, at the very least, the students to whom I do owe my time, as well as my own research and my various other professional commitments.

>>
It's not that I object to people disagreeing with me! While I am committed to my own ideas, I don't have the hubris to believe I can't be wrong.

>>
Rather, it's the format that I object to. If said people want to write their own papers, and send them through peer review, great! The conversation that is science continues.

>>
(For context, I should add: these are not people I already know. They are people contacting me out of the blue and demanding my time.)

>>
But it has led me to reconsider a bit of my "contacting me" page, which previously said: "Like (almost) all academics, I'm always delighted to discuss my research with someone who has read my work."

>>
It turns out that that's not really true. At least the "always" is clearly an overstatement. (Though, in fairness, some of these questions do make it plain the emailer hasn't read my work, at least not carefully.)

>>
So, I've updated that bit of my page, and in the process added a note about preprints:

>>
[Screenshot of the updated webpage]
I doubt this will make a difference in the flow of email, but it will make it that much easier for me to just delete the messages I'm not replying to, without spending much energy worrying about it.

>>
I was tempted to add, but didn't: As a rule, I do not participate in discussions with people who refuse to grant me the presupposition of humanity ("Aren't people just stochastic parrots?")

/fin

