Tal Linzen
Tweets about language, computers and minds, faculty @nyuling and @NYUDataScience, he/him. Emails are better than DMs.
Oct 27, 2022 11 tweets 4 min read
Today's 🧵!

A lot of recent work in psycholinguistics and cognitive neuroscience appears to assume strong convergence between human predictions during sentence comprehension and the predictions of neural language models. This seems too strong - LMs are trained with objective functions different from the ones humans use:
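
For concreteness, a minimal sketch (not from the thread) of how LM "predictions" are usually operationalized in this literature: per-word surprisal from a pretrained causal LM. The specific model (GPT-2) and library (HuggingFace transformers) are illustrative assumptions, not choices the thread endorses.

```python
# Minimal sketch: per-word surprisal from a pretrained causal LM, the usual
# way LM "predictions" are linked to human comprehension measures.
# GPT-2 and the HuggingFace transformers API are illustrative assumptions.
import math
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

sentence = "The keys to the cabinet are on the table."
ids = tokenizer(sentence, return_tensors="pt").input_ids  # shape (1, seq_len)

with torch.no_grad():
    logits = model(ids).logits  # shape (1, seq_len, vocab_size)
log_probs = torch.log_softmax(logits, dim=-1)

# Surprisal of token t is -log2 p(token_t | tokens_<t); the first token has
# no left context under this model, so start at t = 1.
for t in range(1, ids.shape[1]):
    tok_id = ids[0, t].item()
    surprisal_bits = -log_probs[0, t - 1, tok_id].item() / math.log(2)
    print(f"{tokenizer.decode([tok_id]):>12s}  {surprisal_bits:5.2f} bits")
```

Studies in this line of work typically regress human measures (reading times, N400 amplitude, etc.) on surprisal values like these.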
Mar 26, 2021 6 tweets 1 min read
Due to popular demand (ok just @talyarkoni but he's a big account) here's my shitposting pledge: WHEREAS, it is impossible to discuss anything seriously on Twitter even in the best of times;
Jan 25, 2021 12 tweets 3 min read
My DMs are exploding with requests for language model bigness takes and I'm happy to oblige: The scaling laws paper (Kaplan et al., arxiv.org/abs/2001.08361) shows that, *for a particular neural network architecture* (transformers), increasing the number of parameters improves language model performance, and so does increasing the training corpus size.
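
For concreteness, the fits in that paper take a power-law form; a rough restatement (constants approximate, as reported in the paper's summary) is:

```latex
% Approximate power-law fits from Kaplan et al. (2020); constants are the
% paper's rounded values and should be treated as approximate.
L(N) \approx \left(\frac{N_c}{N}\right)^{\alpha_N},
  \qquad \alpha_N \approx 0.076,\quad N_c \approx 8.8 \times 10^{13}
  \ \text{(non-embedding parameters)}
\\[4pt]
L(D) \approx \left(\frac{D_c}{D}\right)^{\alpha_D},
  \qquad \alpha_D \approx 0.095,\quad D_c \approx 5.4 \times 10^{13}
  \ \text{(training tokens)}
```

Here L is test cross-entropy loss; the first fit applies when data is not the bottleneck, the second when model size is not.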
Jan 24, 2021 5 tweets 1 min read
The review that Marco Baroni and I wrote on syntax in neural networks and why linguists should care is now "officially published" annualreviews.org/doi/full/10.11… I did most of the writing on this article in late March 2020 in New York City, which was an extremely bizarre context to be writing in, but I had a co-author who feels very strongly about deadlines, so.
Jul 27, 2018 43 tweets 14 min read
I'm going to try to live tweet Matt Botvinick's (DeepMind) keynote talk at #CogSci2018. Wish me luck: The title of the talk is "Nature and Nurture in AI". Botvinick starts by surveying the developments in AI that excited him and made him join DeepMind.