I simultaneously have chatGPT-fatigue and think about it every day - it truly changed the discourse around LLMs. Plus, chatty-P provided us with _so_ many memes. The following is my current mental model of chatGPT (now popularly synonymous with LLMs) and what might be missing.
Contrary to what we sometimes read on twitter, language is meaningfully different from other types of communication; it's a complex system of symbolic reference.
How? Symbols relate to each other in a system, allowing us to describe each in terms of the others; they don't need to co-occur with physical events (we can talk about democracy and the future); and once a new symbol is acquired, the relational system allows generalisation to novel contexts.
With lots of effort, we can teach chimpanzees a similar form of symbolic system (a language). Even though lots of non-human animals have incredibly complex forms of communication, language does not _naturally_ show up in any of them.
LLMs acquired a noisy system of symbols from inordinate amounts of data and speak fluently. This is really cool (and scary - thanks safety ppl for your work 🙇♀️) and something many didn't think possible just a few years ago.
But base LLMs are exceedingly toxic. (RL)HF aligned them (somewhat) to polite modes of speaking - at a surface level, subject to T&Cs and caveats etc etc etc; basically, don't believe anything chatty-P says, but then again, don't believe anything I say either.
There are still lots of failure modes, and as of yet I don't think chatGPT would pass a true Turing test (with a judge who knows what they're doing). Although it's an impressive story-teller and, at times, hilarious, chatGPT can be glaringly incoherent. We've all seen examples of it.
I think an important missing component is #agency (setting goals and achieving them). There are levels to agency: a lizard can find food under uncertainty; a monkey sets diverse goals and achieves them under more uncertainty. We sit at the top of this self-defined agency-pyramid.
When we speak, there are complex underlying intentions and beliefs at play. These intentions are what make us mostly coherent and give our language meaning. LLMs simulate this agency to some extent (arxiv.org/abs/2212.01681), but they are not agents themselves, and hence sometimes show incoherence.
(RL)HF might make this seem better, because the LLMs appear to more coherently adhere to a smaller set of beliefs (be polite! be helpful! don't be racist!), but it doesn't fundamentally change the fact that there is no agency, and hence no coherence or "meaning".
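A side note for the ML-inclined: the standard KL-regularized objective from the RLHF literature (the InstructGPT line of work; the symbols $r_\phi$, $\pi_\theta$, $\pi_{\mathrm{ref}}$, $\beta$ below are the usual notation, a sketch rather than anything chatGPT-specific) makes this concrete:

$$
\max_{\theta}\;
\mathbb{E}_{x \sim \mathcal{D},\; y \sim \pi_\theta(\cdot \mid x)}
\big[\, r_\phi(x, y) \,\big]
\;-\;
\beta \,\mathrm{KL}\big(\pi_\theta(\cdot \mid x) \,\big\|\, \pi_{\mathrm{ref}}(\cdot \mid x)\big)
$$

Here $r_\phi$ is a reward model trained on human preferences, and the KL term keeps the tuned policy $\pi_\theta$ close to the base model $\pi_{\mathrm{ref}}$. In other words: the objective reweights what the base model already says; nothing in it adds a goal-setting mechanism.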
Language might require something different (to be added! with all the DL!! I don't hate chatty-P, trust me!!!). I don't know whether embodiment is part of this picture or whether agency can exist separately, but I'm excited to think about it - because, honestly, I don't know what agency is.
Sources of inspiration:
- _Ape Language: From Conditioned Response to Symbol_ by Savage-Rumbaugh
- _The Symbolic Species_ by Deacon
- _Symbolic Behaviour in AI_ by Santoro, Lampinen, et al.
- _The Evolution of Agency_ by Tomasello
- _Language Models as Agent Models_ by Andreas