Why do models often attend to salient words even though the training loss does not require it? To understand this inductive bias, we need to analyze the optimization trajectory 🧐

Sharing our preprint "Approximating How Single Head Attention Learns" #NLProc
We approximate training with 2 stages: early on, while attention is still uniform, the model learns to translate an individual input word `i` to an output word `o` if they co-occur frequently. Later, the model learns to attend to `i` when the correct output is `o`, because it already knows that `i` translates to `o`.
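To make the two stages concrete, here is a minimal toy sketch (ours, not the paper's code; the class `ToyAttention`, the data generator `make_batch`, and all hyperparameters are illustrative assumptions): a single-head attention layer is trained to output the translation of one salient word, while we log accuracy and the attention mass on that word, so you can watch whether translations are learned before attention sharpens.

```python
# Toy sketch (ours, not the paper's code): a single-head attention model
# trained to output the translation of one "salient" input word, logging
# accuracy and attention on that word to observe the two stages.
import torch
import torch.nn as nn

torch.manual_seed(0)
N_SALIENT, N_DISTRACT, D, L = 10, 40, 32, 6
V = N_SALIENT + N_DISTRACT                        # input vocabulary size

class ToyAttention(nn.Module):                    # hypothetical, for illustration
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(V, D)
        self.query = nn.Parameter(torch.zeros(D))  # single attention query
        self.out = nn.Linear(D, N_SALIENT)         # predicts the translation

    def forward(self, x):                          # x: (batch, L) token ids
        h = self.emb(x)                            # (batch, L, D)
        attn = (h @ self.query).softmax(dim=-1)    # (batch, L) attention weights
        ctx = (attn.unsqueeze(-1) * h).sum(dim=1)  # attention-weighted sum
        return self.out(ctx), attn

def make_batch(n=512):
    # one salient word (ids 0..9) per sequence at a random position; the
    # rest are distractors (ids 10..49); the target is the salient word's id
    x = torch.randint(N_SALIENT, V, (n, L))
    salient = torch.randint(0, N_SALIENT, (n,))
    pos = torch.randint(0, L, (n,))
    x[torch.arange(n), pos] = salient
    return x, salient, pos

model = ToyAttention()
opt = torch.optim.SGD(model.parameters(), lr=0.5)
for step in range(2001):
    x, y, pos = make_batch()
    logits, attn = model(x)
    loss = nn.functional.cross_entropy(logits, y)
    opt.zero_grad()
    loss.backward()
    opt.step()
    if step % 400 == 0:
        acc = (logits.argmax(-1) == y).float().mean().item()
        attn_on_salient = attn[torch.arange(len(y)), pos].mean().item()
        print(f"step {step:4d}  acc {acc:.2f}  attn_on_salient {attn_on_salient:.2f}")
```

If the 2-stage picture holds in this toy setting, accuracy should rise above chance while `attn_on_salient` is still near uniform (1/L ≈ 0.17), and only afterwards should the attention mass on the salient word grow.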
All approximations are "wrong" (and apparently reviewers do not like our assumptions), but they let us explain many existing empirical phenomena and predict new ones: using our theory, we construct a distribution that is easy to express but hard to learn.
Takeaway: to understand many interesting properties of neural networks, we need to understand not only the expressiveness of the models and the models after training, but also the optimization trajectory!
Paper: arxiv.org/pdf/2103.07601…, joint work with @sea_snell, Dan Klein, and @JacobSteinhardt

REJECTED by EMNLP 2020, NAACL 2021, and #EMNLP2021, but I love it more than most of my prior accepted works :) Time will tell its impact.
Caveat: this theoretical framework only captures some (important) aspects of the system, and is far from a perfect approximation of what actually happens.

More from @ZhongRuiqi

30 Aug
We can prompt language models for 0-shot learning ... but it's not what they are optimized for😢.

Our #emnlp2021 paper proposes a straightforward fix: "Adapting LMs for 0-shot Learning by Meta-tuning on Dataset and Prompt Collections".

Many interesting takeaways below 👇
1. Prompting a language model out of the box can be highly suboptimal. For example, GPT-3 (175B parameters) gets 80% on SST-2 zero-shot, while UnifiedQA (700M) gets 92% 🤔 so even adaptation to generic question answering can make a ~200x smaller model better ...
2. We fix this by directly fine-tuning the model to produce the desired output given the task description and the task inputs. To get the training data, we unified datasets from 43 different sources into the same QA format and wrote 441 task descriptions in total *on our own* (a rough sketch of the format is below).
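As a rough illustration of what the unified format can look like (the template, the function name `to_meta_tuning_example`, and the example below are our own guesses for a sketch, not the paper's exact format or released code):

```python
# Rough illustration (our guess, not the paper's exact template or code) of
# how a labeled classification example becomes a meta-tuning training pair:
# a task description plus the input, answered with "yes" or "no".

def to_meta_tuning_example(description: str, text: str, label: bool):
    """Turn one labeled example into a (prompt, target) text pair."""
    prompt = f"{description}\nInput: {text}\nAnswer (yes or no):"
    target = "yes" if label else "no"
    return prompt, target

prompt, target = to_meta_tuning_example(
    description="Is the review positive?",
    text="The film drags at first, but the final act is genuinely moving.",
    label=True,
)
print(prompt)
print(target)  # "yes"
# Pairs built like this from many source datasets and task descriptions are
# then used for ordinary fine-tuning, so the model learns to follow unseen
# task descriptions at test time.
```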