🧵Been reading several recent arXiv entries claiming planning capabilities of #LLMs. This area is so full of anthropomorphisms--"Chain of Thought Prompting", "Inner Monologue"--that it cries out for a cleansing read of Drew McDermott's #AI Meets Natural Stupidity 1/
One popular line claims that while LLMs may give wrong plans, they can improve with the right prompting (which, in one case, is claimed to even induce an "inner monologue", all Westworld-host-like).
The prompts in the appendices, however, seem to suggest a Clever Hans effect in action. 2/
Clever Hans, btw, is the ChatHorseGPT of its time--a horse that showed very un-horselike arithmetic prowess by tapping its hoof the right number of times, as long as the questioner stood in front of it and knew the answers. Crucially, no fraudulent intent was needed. 3/ en.wikipedia.org/wiki/Clever_Ha…
Coming back to LLMs, most successful examples of iterative prompting in action, not surprisingly, involve cases where the questioner either already knows the answer or can validate the LLM's answer using an external simulator. 4/
It may be fine to give credit to LLMs for finally getting the desired result if the solution criterion is amorphous (e.g. the kind of essay/email you want). Not so much for planning and reasoning, where there are well-defined notions of correctness! 5/
It doesn't help that, under the hood, many prompting strategies tend to be very problem-specific--making it easy to unwittingly hand useful debugging hints to the LLM.
If you are giving debugging hints, you are acting as an external solver for the LLM! 6/
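To make that concrete, here is a minimal sketch of the iterative-prompting loop these papers describe--my own illustration, not code from any of them, and every function name below is a hypothetical placeholder. The point is that whatever correctness the final plan has comes from verify() (the external simulator, solver, or human who knows the answer), not from the LLM.

```python
# A minimal sketch of iterative prompting with an external verifier (all names hypothetical).
# Whatever guarantees the final plan's correctness lives in verify(), not in the LLM.

def iterative_prompting(problem, llm_propose_plan, verify, max_rounds=5):
    """llm_propose_plan(problem, feedback) -> a candidate plan (e.g., a list of actions).
    verify(problem, plan) -> (ok, error_message), computed by an external simulator/solver."""
    feedback = None
    for _ in range(max_rounds):
        plan = llm_propose_plan(problem, feedback)
        ok, error = verify(problem, plan)   # the external checker does the real work
        if ok:
            return plan
        # the "hint" fed back is exactly the debugging information the checker produced
        feedback = f"Your previous plan failed: {error}. Please revise it."
    return None
```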
I am *not saying* that the people advocating iterative prompting for planning/reasoning are knowingly cheating. Like the owner of Clever Hans, the prompters of GPT-3 need not set out to give it sneaky help--and yet may give it anyway. 7/
A related issue is the confusion in some of the papers about what makes planning hard. For example, the "Inner Monologue" paper says it uses a "3 block stacking" benchmark, but qualifies that two of the blocks are "pre-placed" in the correct configuration already. 8/
Well, the combinatorial search aspect of planning arises from the *subgoal interactions*! If you have three blocks to stack, and two of them are already placed in the correct configuration, there are no interactions to begin with.. 9/
No wonder, then, that I had to give deliberate debugging help to get the LLMs--be it ChatGPT (see 5 above) or Bard (which I finally played with today👇)--to finally blurt out the correct solution for the Sussman Anomaly.. making me wonder who *actually* solved the problem.. 10/
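For readers who haven't met it: the Sussman Anomaly is the canonical three-block example of subgoal interaction. Below is a minimal sketch in Python--my own illustration, not anything from the papers--showing why the naive "achieve one subgoal, then the other" strategy fails and the two subgoals must be interleaved.

```python
# Sussman Anomaly sketch (illustrative only).
# Initial state: C is on A; A and B are on the table.  Goal: A on B and B on C.

initial = {"C": "A", "A": "table", "B": "table"}   # block -> what it sits on
goal = [("A", "B"), ("B", "C")]                    # (x, y) means x must end up on y

def clear(state, block):
    """A block is clear if nothing is stacked on top of it."""
    return all(under != block for under in state.values())

def move(state, block, dest):
    """Move block onto dest (another block or the table) if both are clear."""
    if clear(state, block) and (dest == "table" or clear(state, dest)):
        state = dict(state)
        state[block] = dest
        return state
    return None

# Naive plan: achieve "A on B" first.
s = move(initial, "C", "table")   # unstack C so A becomes clear
s = move(s, "A", "B")             # subgoal 1 achieved: A on B
print(move(s, "B", "C"))          # -> None: B is not clear (A sits on it), subgoal 2 blocked

# The correct plan has to interleave the subgoals:
s = move(initial, "C", "table")
s = move(s, "B", "C")             # work on subgoal 2 first
s = move(s, "A", "B")             # then subgoal 1
print(s)                          # A on B and B on C: goal reached
```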
..and as I already wrote elsewhere, generating general blocks world plans directly is still well beyond the reach even of GPT-4 (@TheEconomist's claims notwithstanding..) 11/
(This thread is as much for my students, who are sometimes mad at me that I keep poking holes in their attempts to join the gold rush of chain-of-thought gangs..) 12/
So @TheEconomist tells me now that #LLMs can do planning and reasoning after all. Obviously our own dismal experience of their planning performance (cf. the 🧵 at ) must be a clear outlier.. 🙄 Thank goodness I pay big bucks for my subscription.. 1/
Interestingly, I was just telling someone today how several of the papers on "LLMs for Task Planning by Prompting" are rife with the Clever Hans effect (cf. en.wikipedia.org/wiki/Clever_Ha… ). I guess I will have to do a thread.. 2/
(While we should all be used to #LLM hype-expertise in the press by now, this particular case was prickly as it was my cocky son who airily pointed this article out to me at dinner with barely concealed delight.. 😡 ) 3/
In bemoaning how things are getting worse every day, we often tend to forget that the state of the world is becoming monotonically more observable. 1/
It may not be so much that there is monotonically increasing suffering in this world, but that it is monotonically more observable--we can be aware of it, if we choose to. 2/
Wars become forever stalemates because both parties have much better observability into the state of the adversary. As my son says, Normandy-like surprise attacks are much harder in this era of satellite/cell signal observability (Ukraine being a textbook case in point..) 3/
The impressive deep pattern-recognition abilities of #DNNs such as #LLMs are sometimes mistaken for reasoning abilities.
I can learn to guess, with high accuracy, whether a SAT instance is satisfiable or not, but this is not the same as knowing how to solve SAT. Let me explain. 1/
Suppose you train a learner with a large number of Boolean 3-SAT instances labeled with whether or not they are satisfiable. There is no reason to doubt that a modern #DNN-based learner will manage to learn deep features corresponding to the γ ratio-- #clauses/#variables .. 2/
..and armed with γ, it can also essentially figure out the sharp-threshold phenomenon w.r.t. γ, and should be able to predict with high certainty that instances with γ < 4.3 are satisfiable and those with γ > 4.3 are unsatisfiable. 3/
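To make the guess-vs-solve distinction concrete, here is a minimal sketch--my own illustration, not from the thread--contrasting a γ-based predictor with an actual (toy, brute-force) solver. The ~4.27 cutoff is the standard 3-SAT phase-transition estimate; the predictor can guess a label, but only the search procedure can hand you a satisfying assignment.

```python
# Guessing satisfiability from the clause/variable ratio vs. actually solving the instance.
import itertools
import random

def random_3sat(n_vars, n_clauses, rng=random):
    """Random 3-SAT instance: each clause has 3 distinct variables with random signs."""
    clauses = []
    for _ in range(n_clauses):
        vars_ = rng.sample(range(1, n_vars + 1), 3)
        clauses.append([v if rng.random() < 0.5 else -v for v in vars_])
    return clauses

def predict_by_gamma(clauses, n_vars, threshold=4.27):
    """'Pattern' predictor: guess satisfiable iff gamma = #clauses/#variables < threshold."""
    return len(clauses) / n_vars < threshold

def solve_by_search(clauses, n_vars):
    """Brute-force solver: only this one can actually produce a satisfying assignment."""
    for bits in itertools.product([False, True], repeat=n_vars):
        assign = {i + 1: b for i, b in enumerate(bits)}
        if all(any(assign[abs(l)] == (l > 0) for l in c) for c in clauses):
            return assign
    return None

if __name__ == "__main__":
    n = 12
    for m in (30, 70):  # gamma = 2.5 (under-constrained) vs ~5.8 (over-constrained)
        inst = random_3sat(n, m)
        print(m / n, predict_by_gamma(inst, n), solve_by_search(inst, n) is not None)
```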
There seems to be an almost willful confusion on #AI twitter about the need for, and role of, explainability of #AI systems.
Contrary to the often polarizing positions, it is neither the case that we always need explanations nor is it the case that we never need explanations. 🧵1/
We look for explanations of high-level decisions on (what for us are) explicit-knowledge tasks, and where contestability and collaboration are important.
We rarely look for explanations of tacit-knowledge/low-level control decisions. 2/
I don't need an explanation of why you see a dog in a picture, why you put your left foot 3 mm ahead of your right, or why Facebook recommends me yet another page.
I do want one if I am denied a loan, or if I need a better model of you so I can coordinate with you. 3/
Our benchmark tasks (prompts) are posed in the context of common "toy domains" used in automated planning, and are small enough not to involve huge combinatorics. In particular, they should be accessible to lay humans. 2/
If these results seem contrary to the optimism surrounding #LLMs' emergent reasoning abilities (e.g. logical & ethical judgements), we think it may be because those benchmarks correspond to very shallow reasoning that can more easily be mimicked from previously seen patterns. 3/