@LakeBrenden opts instead for a qualified "yes" on the general question, and a strong "yes" on the more specific one: do LLMs need sensory grounding to understand words as people do?
Another day, another opinion essay about ChatGPT in the @nytimes. This time, Noam Chomsky and colleagues weigh in on the shortcomings of language models. Unfortunately, this is not the nuanced discussion one could have hoped for. 🧵 1/
For a start, I'm not sure the melodramatic tone serves the argument: "machine learning will degrade our science and debase our ethics", and "we can only laugh or cry at [LLMs'] popularity"! I know op-eds are often editorialized for dramatic effect, but maybe this is a bit much? 2/
The substantive claims are all too familiar: LLMs learn from co-occurrence statistics without leveraging innate structure; they describe and predict instead of doing causal inference; and they can't balance original reasoning with epistemic and moral constraints. 3/
I don't think lossy compression is a very helpful analogy to convey what (linguistic or multimodal) generative models do – at least if "blurry JPEGs" is the leading metaphor. It might work in a loose sense, but it doesn't tell the whole story. 1/
Generative models can definitely be used for lossy compression (see below), but that's a special case of their generative capabilities. Reducing everything they do to lossy compression perpetuates the idea that they merely regurgitate approximations of their training samples. 2/
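To make the "special case" concrete: any autoregressive model's next-token probabilities define a code, and pairing the model with an entropy coder turns it into a compressor. A minimal sketch below, with a toy character bigram standing in for an LLM (all names are mine, for illustration only):

```python
import math
from collections import Counter, defaultdict

def train_bigram(text):
    """Toy stand-in for a generative model: character-level bigram counts."""
    counts = defaultdict(Counter)
    for a, b in zip(text, text[1:]):
        counts[a][b] += 1
    return counts

def prob(counts, a, b, alpha=1.0, vocab=256):
    """Laplace-smoothed P(next=b | current=a)."""
    c = counts[a]
    return (c[b] + alpha) / (sum(c.values()) + alpha * vocab)

def code_length_bits(counts, text):
    """Shannon code length: an arithmetic coder driven by this model
    would need roughly this many bits to encode the text losslessly."""
    return sum(-math.log2(prob(counts, a, b)) for a, b in zip(text, text[1:]))

model = train_bigram("the quick brown fox jumps over the lazy dog " * 50)
msg = "the lazy dog jumps over the quick brown fox"
print(f"~{code_length_bits(model, msg):.0f} bits vs {8 * len(msg)} bits raw")
```

The better the model predicts, the shorter the code; the "lossy" part comes from discarding whatever the model can plausibly regenerate. The point is that compression falls out of good prediction as a byproduct, not as a description of everything these models do.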
This bit about interpolation strikes me as particularly misleading. Inference in generative models involves computations that are far more complex and structured than (say) nearest-neighbor pixel interpolation in image decompression. 3/
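To illustrate the gap, here's a schematic side-by-side under my own simplifications: nearest-neighbor interpolation is one local lookup per output pixel, while sampling from a diffusion-style model iteratively refines the whole image through a learned network. The `denoiser` below is a placeholder for a trained model, and a real sampler would also involve noise-schedule terms this sketch omits:

```python
import numpy as np

def nearest_neighbor_upsample(img, scale=2):
    """Decompression-style interpolation: each output pixel just copies
    its nearest known neighbor. One local lookup, no learned structure."""
    return img.repeat(scale, axis=0).repeat(scale, axis=1)

def diffusion_style_sample(denoiser, shape, steps=50, seed=0):
    """Schematic generative inference: start from pure noise and refine
    the *entire* image through many passes of a learned network."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(shape)
    for t in reversed(range(steps)):
        x = denoiser(x, t)  # global, learned update conditioned on step t
    return x
```

The contrast isn't about line count: one procedure is a fixed local rule, the other runs a learned, global computation dozens of times per sample.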
Can you reliably get image generation models like DALL-E 2 to illustrate specific visual concepts using made-up words? In this new preprint, I show that you can, using methods for text-based adversarial attacks on image generation. 1/12
Image generation models are typically trained on multilingual datasets (even if unintentionally). The paper introduces "macaronic prompting": concatenating subword chunks from synonymous words across languages to craft nonce strings that reliably query specific visual concepts. 2/12
For example, the word for “birds” is “Vögel” in German, “uccelli” in Italian, “oiseaux” in French, and “pájaros” in Spanish. Concatenate subword tokens from these words and you get strings like “uccoisegeljaros”, which reliably prompt DALL-E to generate images of birds. 3/12
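Here's a toy reconstruction of the idea. It's my simplification: the actual method works with the model's own subword tokenization, whereas this sketch just chops short prefixes and suffixes off each translation.

```python
from itertools import permutations, product

translations = ["Vögel", "uccelli", "oiseaux", "pájaros"]  # "birds"

def chunks(word):
    """Crude stand-in for subword tokenization: short prefixes/suffixes."""
    w = word.lower()
    return {w[:n] for n in (3, 4)} | {w[-n:] for n in (3, 4, 5)}

def macaronic_candidates(words):
    """Concatenate one chunk from each word, in every order."""
    out = set()
    for combo in permutations(words):
        for parts in product(*(chunks(w) for w in combo)):
            out.add("".join(parts))
    return out

cands = macaronic_candidates(translations)
print(len(cands), "candidate nonce strings")
print("uccoisegeljaros" in cands)  # the example above -> True
```

Each candidate would then be fed to the model and scored by how consistently it yields the target concept. The successful strings are adversarial in the sense that they mean nothing to humans but something quite specific to the model.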
Are large pre-trained models nothing more than stochastic parrots? Is scaling them all we need to bridge the gap between humans and machines? In this new opinion piece for @NautilusMag, I argue that the answer lies somewhere between these two extremes. 1/14
While large pre-trained models are undeniably impressive, many researchers have rightfully warned that we shouldn't jump to conclusions about how similar they are to human cognition. The recent LaMDA story is yet another cautionary tale about our natural tendency toward anthropomorphism. 2/14
Here we go again! Parti, a new text-to-image model from @GoogleAI, drops contrastive learning and diffusion in favor of good old seq-to-seq autoregression. Results shared in the paper seem state-of-the-art for complex compositional prompts, although some failure modes remain.
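Schematically, the recipe looks something like the sketch below. This is my own pseudocode-level reconstruction, not Google's code: every function name is a placeholder for a trained component, with `image_detokenizer` playing the role of Parti's ViT-VQGAN.

```python
import numpy as np

def sample_token(logits, temperature=1.0, rng=None):
    """Sample one discrete token id from a logits vector."""
    if rng is None:
        rng = np.random.default_rng()
    p = np.exp((logits - logits.max()) / temperature)
    p /= p.sum()
    return int(rng.choice(len(p), p=p))

def generate_image(prompt, text_encoder, decoder, image_detokenizer,
                   n_image_tokens=1024):
    """Seq-to-seq autoregression: the image is produced as a sequence of
    discrete tokens, sampled left to right like words in a sentence."""
    context = text_encoder(prompt)               # encode the prompt
    image_tokens = []
    for _ in range(n_image_tokens):
        logits = decoder(context, image_tokens)  # predict next image token
        image_tokens.append(sample_token(logits))
    return image_detokenizer(image_tokens)       # tokens -> pixels
```

No diffusion process, no contrastive objective: just a plain autoregressive transformer over image tokens, scaled up. That's what makes the compositional results below notable.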
@GoogleAI "A portrait photo of a kangaroo wearing an orange hoodie and blue sunglasses standing on the grass in front of the Sydney Opera House holding a sign on the chest that says Welcome Friends!"
"A map of the United States made out of sushi. It is on a table next to a glass of red wine."
This is a nice and fairly exhaustive overview of the potential harms of LLMs, but in my opinion it's still missing a more indirect risk, one that pertains to human-to-human interaction online. 1/
As LLMs become easier and cheaper for companies and individuals to use, LLM-based (chat)bots will become commonplace online. I worry that this could eventually degrade human communication by making people increasingly suspicious that they are talking to machines. 2/
Over time, this might give rise to a new argumentative fallacy we could call "reductio ad machinam": refusing to engage in good faith with someone online on the grounds that they might be a text-generation algorithm. 3/