You've probably seen results showing impressive few-shot performance of very large language models (LLMs). Do those results mean that LLMs can reason? Well, maybe, but maybe not. Few-shot performance is highly correlated with pretraining term frequency. arxiv.org/abs/2202.07206
We focus on numerical reasoning (addition, multiplication, and unit conversion). We use the same formats and tasks used previously to show impressive few-shot performance, but we systematically evaluate every number and correlate performance with pretraining term frequency.
For example, a model that "knows" how to multiply should have similar performance multiplying 23*X and 24*X, for various X. We evaluate GPT-J on Y*X, for Y in [0, 100] and X in [1, 50], and plot average accuracy against Y's frequency in the Pile (thanks #EleutherAI!).
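A minimal sketch of this kind of per-operand evaluation (the prompt wording and the `predict` stub are illustrative assumptions, not the paper's exact setup; a real run would decode answers from GPT-J):

```python
# Sketch: measure few-shot accuracy on Y*X for each Y, so that accuracy can
# later be compared against how often Y appears in the pretraining corpus.
# `predict` is a hypothetical stand-in for an LLM call.

def build_prompt(y, x, shots=((2, 3), (7, 4))):
    """Few-shot prompt: a couple of solved examples, then the query."""
    lines = [f"Q: What is {a} times {b}? A: {a * b}" for a, b in shots]
    lines.append(f"Q: What is {y} times {x}? A:")
    return "\n".join(lines)

def predict(prompt):
    # Stand-in for the model: parse the final query and answer it exactly.
    # A real experiment would send the prompt to GPT-J and read its output.
    query = prompt.rsplit("What is ", 1)[1]          # "24 times 50? A:"
    a, b = [int(t) for t in query.rstrip("? A:").split(" times ")]
    return str(a * b)

def accuracy_for_y(y, xs=range(1, 51)):
    """Average accuracy of Y*X over all X, as in the thread's setup."""
    hits = sum(predict(build_prompt(y, x)) == str(y * x) for x in xs)
    return hits / len(list(xs))
```

With a real model in place of the stub, `accuracy_for_y(23)` and `accuracy_for_y(24)` should be close if the model has actually learned multiplication, which is exactly the comparison the thread describes.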
The correlation is striking. The effect remains as we increase the number of shots, vary the model size, evaluate accuracy on other tasks, and count co-occurrences of X, Y, and/or Z (the correct answer), not just unigram statistics.
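The frequency statistics themselves are simple to compute; here is a toy sketch (the whitespace tokenization, window size, and Pearson correlation are illustrative choices, not necessarily the paper's exact procedure):

```python
from collections import Counter
from math import sqrt

def term_stats(docs, window=5):
    """Unigram counts plus windowed co-occurrence counts over whitespace tokens.

    Co-occurrence counting is what lets us go beyond unigram statistics and
    count how often X, Y, and/or Z (the correct answer) appear together.
    """
    unigrams, pairs = Counter(), Counter()
    for doc in docs:
        toks = doc.split()
        unigrams.update(toks)
        for i, tok in enumerate(toks):
            for other in toks[i + 1 : i + 1 + window]:
                pairs[frozenset((tok, other))] += 1
    return unigrams, pairs

def pearson(xs, ys):
    """Plain Pearson correlation, e.g. accuracy vs. log term frequency."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sqrt(sum((x - mx) ** 2 for x in xs))
    vy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (vx * vy)
```

At Pile scale this would be done with streaming counts over the corpus shards rather than in-memory lists, but the quantities being computed are the same.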
How do we interpret these results? It seems unlikely that a model that has this dependence on pretraining term frequency is doing "reasoning", but it's hard to state that unequivocally because "reasoning" and "memorization" are not well defined.
Is there a multiplication algorithm in the model's weights that just doesn't get good enough embeddings for less frequent words? Maybe that would be the beginnings of "reasoning".
Or is it somehow just regurgitating associations from pretraining data at test time? Note that we are not measuring exact train/test overlap, only the training frequency of terms from test instances. We'd need to peer inside the black box to answer these questions.
At the least, this analysis should give us caution when looking at few-shot performance. It seems impossible to properly interpret any few-shot benchmark result without reference to a model's pretraining data.
The paper is a collaboration with @rloganiv, @nlpmattg, and @sameer_. And thanks to @AlexTamkin for the shoutout earlier today!

Thread by Yasaman Razeghi