LSA 2003, Newmeyer's keynote. I was a few years into my academic job search and attending the conference with my 9-month-old son and my mom to look after him.

>>
Mom was pushing the baby around the hotel's ballroom-level hallways in the stroller, trying to keep him happy, but he was NOT happy and I could hear him.

>>
I got up from my seat in the standing-room-only crowd to go check on him, and when I came back, ended up standing at the back of the room, next to Andrew Garrett, who I knew from the one-year stint I did at Cal in 2000-2001.

>>
Newmeyer's topic was "Grammar is grammar and usage is usage" and I forget the exact details, but Garrett said to me: "Isn't someone like you or Dan Jurafsky going to get up there and...?"

>>
I was so pleased to have been put in the same league as Jurafsky that I figured I just had to go ask a question. And I was actually much better positioned to get to the mic to line up than if I hadn't gotten up to go check on my kiddo.

>>
I forget exactly what I asked, but it must have been something to do with it being an empirical question whether linguistic competence (knowledge of language, as stored in actual brains) really did only concern grammaticality or not.

>>
Newmeyer's answer involved Occam's Razor, to which I got to reply "But Occam's Razor cuts both ways", much, as I remember it, to the audience's approval. :)
Thank you, @wtimkey8, for starting this new version of the thread. The other one was so awful to read and so hard to look away from....
(Hmm not keynote, but Presidential Address. Not that that really matters...)

