LSA 2003, Newmeyer's keynote. I was a few years into my academic job search and attending the conference with my 9-month-old son and my mom to look after him.
Mom was pushing the baby around the hotel ballroom level hallways in the stroller trying to keep him happy, but he was NOT happy and I could hear him.
>>
I got up from my seat in the standing-room-only crowd to go check on him, and when I came back, ended up standing at the back of the room, next to Andrew Garrett, who I knew from the one-year stint I did at Cal in 2000-2001.
>>
Newmeyer's topic was "Grammar is grammar and usage is usage" and I forget the exact details, but Garrett said to me: "Isn't someone like you or Dan Jurafsky going to get up there and...?"
>>
I was so pleased to have been put in the same league as Jurafsky that I figured I just had to go ask a question. And I was actually much better positioned to get to the mic to line up than if I hadn't gotten up to go check on my kiddo.
>>
I forget exactly what I asked, but it must have been something to do with it being an empirical question whether linguistic competence (knowledge of language, as stored in actual brains) really did only concern grammaticality or not.
>>
Newmeyer's answer involved Occam's Razor, to which I got to reply "But Occam's Razor cuts both ways" — much, as I remember it, to the audience's approval. :)
Thank you, @wtimkey8 for starting this new version of the thread. The other one was so awful to read and so hard to look away from....
(Hmm not keynote, but Presidential Address. Not that that really matters...)
First, I'm super skeptical that learning math by doing problem sets is a good model for learning other kinds of things. And even if it were, the idea that LLMs would support that generalization seems super sketchy.
What, specifically, is the system doing to get the student "unstuck" in their non-math assignment? What role does the LLM play? How does the way that LLMs absorb various societal biases from their training data affect performance?
@chirag_shah@webis_de Also Potthast et al: "As no actual conversations are currently supported by conversational search agents, every query is an ad hoc query that is met with one single answer."
No. Actual. Conversations.
There's a whole study to be done on the perils of aspirational tech names.>>
Trying to attend "Conversational Information Seeking: Theory and Evaluation (Session 1)" at #CHIIR2022, but the Zoom link in the conference room in Gather isn't working. Anyone have a clue?
Also, there don't seem to be any papers listed in that session, nor "Conversational Information Seeking: Theory and Evaluation (session 2)" this afternoon. Maybe these are just phantom calendar entries? #CHIIR2022 what's going on?
I find it very nerve-wracking when the interfaces to online conferences are unclear ... like I'm meant to be somewhere, but I can't figure out where, nor can I figure out why I can't figure it out. Also, no helpdesk that I can see, so no one to ask... #chiir2022
@chirag_shah In this #chiir2022 perspective paper we argue that using language model driven conversation agents (e.g. LaMDA) for search is flawed both technically and conceptually.
2/
Technical flaws include the fact that the language models aren't designed to perform "reasoning" (despite wild claims, such as Metzler et al 2021 referring to their "reasoning-like capabilities"). See also Bender & Koller 2020: aclanthology.org/2020.acl-main.…
3/
@GaryMarcus First, I had a good giggle at the thought of someone trying to implement something like the copy/paste functionality of an OS via deep learning. How frustrating would it be if, some non-trivial % of the time, the info being pasted came out randomly different?
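The frustration is easy to simulate: a hypothetical "learned paste" that is faithful even 99% of the time still corrupts output constantly at scale. (A toy sketch of the argument — the function and its accuracy figure are made up for illustration, not any real system.)

```python
import random

def learned_paste(text: str, accuracy: float = 0.99) -> str:
    """Toy stand-in for a 'learned' copy/paste: usually faithful,
    but occasionally emits something slightly different."""
    if random.random() < accuracy:
        return text
    # simulate a small silent corruption: drop the last character
    return text[:-1] if text else text

random.seed(0)
trials = 10_000
failures = sum(learned_paste("hello world") != "hello world"
               for _ in range(trials))
print(f"{failures} corrupted pastes out of {trials}")
```

With a 1% per-call error rate, roughly a hundred of every ten thousand pastes are silently wrong — exactly the kind of failure a deterministic OS primitive never exhibits.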
2/
@GaryMarcus Second, I disagree that photo labeling (i.e. assigning words to images) is necessarily "low stakes". Real harm can come from such systems, when the (photo, label) pair reproduces racism & other -isms. Key work here by @Abebab and colleagues:
This is a very thoughtful reflection by @zephoria --- and it's striking to be offered so much inside info about the CTL/Loris debacle --- but it also doesn't fully connect the dots. A few thoughts/questions:
@zephoria boyd steps the reader through how the organization went from handling data for continuity/quality of service to texters (allowing someone to come back to a conversation, handoffs between counselors, connection to local services) to using data for training counselors to >>
@zephoria using data for internal research, to using data for vetted research by external partners, as one thread. That last step feels rickety, but still motivated by the organization's mission and the fact that the org didn't have enough internal resources to do all the beneficial research.