Now that "Do Transformer Modifications Transfer Across Implementations and Applications?" has been accepted to #EMNLP2021, we can finally tweet about it!

Paper 📝: arxiv.org/abs/2102.11972
Code 💾: github.com/google-researc…
Thread summary: ⬇️ (1/8)
After we published the T5 paper where we empirically surveyed many transfer learning methods to find out what works best, we decided to do something similar for Transformer architecture modifications. (2/8)
In the ~3 years since the Transformer was proposed, hundreds of architectural modifications have been suggested, but almost none of them are in common use. In other words, most Transformers people train today are largely the same as the one proposed in "Attention is All You Need". (3/8)
We were hopeful that by reimplementing many of these modifications and comparing them in typical settings (transfer learning and supervised learning), we could find the helpful ones and combine them into a best-practice Transformer++ for everyone to use. (4/8)
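To make "architectural modification" concrete, here's a minimal NumPy sketch (my illustration, not code from the paper or its repo) of one such change: swapping the vanilla feed-forward block of a Transformer layer for a gated, GLU-style variant. Dimensions, initialization, and the omission of biases are arbitrary simplifications.

```python
import numpy as np

d_model, d_ff = 512, 2048
rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def gelu(x):
    # tanh approximation of the GELU nonlinearity
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (x + 0.044715 * x**3)))

def ff_vanilla(x, w_in, w_out):
    # Baseline feed-forward block (biases omitted): relu(x @ w_in) @ w_out
    return relu(x @ w_in) @ w_out

def ff_gated(x, w_gate, w_in, w_out):
    # GLU-style variant: an extra projection gates the hidden activations
    return (gelu(x @ w_gate) * (x @ w_in)) @ w_out

x = 0.02 * rng.standard_normal((4, d_model))        # 4 token positions
w_in = 0.02 * rng.standard_normal((d_model, d_ff))
w_out = 0.02 * rng.standard_normal((d_ff, d_model))
w_gate = 0.02 * rng.standard_normal((d_model, d_ff))

print(ff_vanilla(x, w_in, w_out).shape)        # (4, 512)
print(ff_gated(x, w_gate, w_in, w_out).shape)  # (4, 512)
```

Changes at roughly this scale are the kind of thing the study re-implements and compares under a shared training setup.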
We recruited a bunch of our colleagues and got to work reimplementing existing modifications. Then, we ran experiments on T5-style transfer learning and machine translation on WMT'14. We were surprised to find that almost none of the modifications actually helped much. (5/8)
What's more, the ones that seemed helpful increased the computational cost or size of the model in some way. So, we pivoted to instead write a paper about this surprising (and somewhat disheartening) finding. (6/8)
Our results alone can't prove that these modifications simply don't "transfer" to new applications and implementations; however, we tried to rule out alternative explanations through additional experiments that you can find in the paper. (7/8)
We're hopeful that our results will prompt researchers to try out new architectural ideas in more than one codebase and on more than a handful of tasks and datasets, to help ensure that any improvements are robust. (8/8)

More from @colinraffel

1 Jun
Can your NLP model handle noooisy mEsSy #realworldtext?

ByT5 works on raw UTF-8 bytes (no tokenization!), beats SoTA models on many popular tasks, and is more robust to noise.

📜 Preprint: arxiv.org/abs/2105.13626
💾 Code/Models: github.com/google-researc…

Summary thread ⬇️ (1/9)
Tokenizers have many drawbacks:
- Finite, fixed vocabulary - often can't process new/unseen languages
- Lack of robustness to missspeling and n o i s e
- Not learned "end-to-end"
- Giant vocabulary matrices in the multilingual setting
- Lots of technical debt in practice

(2/9)
Operating on the raw byte sequence used to represent text (e.g. UTF-8) solves many of the aforementioned issues. The main drawback: Sequence lengths tend to increase significantly compared to using token sequences.

(3/9)
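To illustrate the trade-off with a minimal sketch (mine, not ByT5's actual preprocessing): the UTF-8 bytes of a string can be fed to the model directly, so there is no fixed vocabulary to fall outside of, but the sequence gets several times longer than a word- or subword-level segmentation.

```python
text = "Can your NLP model handle noooisy mEsSy text? Schrödinger agrees."

byte_ids = list(text.encode("utf-8"))   # integers in [0, 255], one per byte
rough_tokens = text.split()             # crude stand-in for a word/subword tokenizer

print(len(rough_tokens), "whitespace tokens")   # 10
print(len(byte_ids), "byte ids")                # 66 ("ö" alone takes two bytes)
print(byte_ids[:10])                            # [67, 97, 110, 32, 121, 111, 117, 114, 32, 78]
```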
6 May
The #ICLR2021 Workshop on Enormous Language Models (WELM) is tomorrow, May 7th!

Full info: welmworkshop.github.io
Livestream: welmworkshop.github.io/livestream/
gathertown info for ICLR registrants: iclr.cc/virtual/2021/w…

Thread summarizing the talks & panels ⬇️ (1/14)
Our first talk will be by Thomas Margoni, who will provide some legal perspective on the use of web data for training large language models. He'll touch on topics like copyright law, rights, and licenses, as they pertain to training data for LMs. (2/14)
Then, @JesseDodge will give a talk on how to document datasets and improve reproducibility of research. He'll discuss the NLP reproducibility checklist, a recent study on documenting C4, and a framework for modeling bias in data. (3/14)
17 Dec 20
I've recently had a number of aspiring ML researchers ask me how to stay on top of the paper onslaught. Here are three concrete tips:
1) Pick a tiny subfield to focus on
2) Skim
3) Rely on your community
Thread to explain ⬇️ (1/5)
1) Pick a tiny subfield to focus on
It's impossible to stay on top of "all of ML". It's a gigantic and diverse field. Being an effective researcher requires laser-focusing on a subfield. Pick a problem that is important, that excites you, and that you feel you can make progress on. (2/5)
2) Skim
You'll find that many papers within your subfield of choice have a lot in common - there is often only a small nugget of novelty in each paper. It's incredibly important to develop your ability to find this nugget as quickly as possible. (3/5)
12 Dec 19
In case you missed our #neurips poster on MixMatch (arxiv.org/abs/1905.02249) today because you aren't in Vancouver or didn't survive the poster session stampede, here's the PDF: github.com/google-researc… and here's a transcript of what I said to everyone who came by: ⬇️ 1/11
The goal in semi-supervised learning (SSL) is to use unlabeled data to improve a model's performance. Many approaches do this by using the model to produce "label guesses" for unlabeled data, and then training the model to predict those guesses. 2/11
Two common ingredients for producing label guesses are consistency regularization ("When I perturb the input or model, the model's prediction shouldn't change.") and entropy minimization ("The model should output low-entropy/confident predictions on unlabeled data.") 3/11
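Here is a minimal sketch of that label-guessing step (an illustration of the two ingredients above, not the released MixMatch code): average the model's predictions over a few augmentations of the same unlabeled example, then "sharpen" the average with a temperature below 1 so the guess is low-entropy.

```python
import numpy as np

def sharpen(p, T=0.5):
    # Lower the temperature of a categorical distribution; T < 1 makes it more confident.
    p = p ** (1.0 / T)
    return p / p.sum()

# Pretend these are softmax outputs for K = 3 augmentations of one unlabeled image.
preds = np.array([
    [0.6, 0.3, 0.1],
    [0.5, 0.4, 0.1],
    [0.7, 0.2, 0.1],
])

avg_pred = preds.mean(axis=0)   # consistency: augmentations should agree on a label
guess = sharpen(avg_pred)       # entropy minimization: push toward a confident guess

print(avg_pred)  # [0.6 0.3 0.1]
print(guess)     # roughly [0.78 0.20 0.02]
```

The model is then trained to predict these guesses on the unlabeled data, alongside the usual supervised loss on the labeled data.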
24 Oct 19
New paper! We perform a systematic study of transfer learning for NLP using a unified text-to-text model, then push the limits to achieve SoTA on GLUE, SuperGLUE, CNN/DM, and SQuAD.
Paper: arxiv.org/abs/1910.10683
Code/models/data/etc: git.io/Je0cZ
Summary ⬇️ (1/14)
Our approach casts *every* language problem as a text-to-text task. For example, English-to-German translation -- input: "translate English to German: That is good." target: "Das ist gut." or sentiment ID -- input: "sentiment: This movie is terrible!", target: "negative" (2/14)
The text-to-text approach allows us to use the same model, loss function, decoding process, training procedure, etc. across every task we study. It also provides a standard testbed for the many ideas we evaluate in our empirical survey. (3/14)
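A minimal sketch of what that casting looks like in practice (a hypothetical helper, not the released preprocessing code): every example becomes an (input string, target string) pair, with a task prefix telling the model what to do.

```python
def to_text_to_text(task, example):
    # Map a task-specific example to a generic (input text, target text) pair.
    if task == "translate_en_de":
        return ("translate English to German: " + example["en"], example["de"])
    if task == "sentiment":
        return ("sentiment: " + example["text"], example["label"])
    raise ValueError(f"unknown task: {task}")

print(to_text_to_text("translate_en_de",
                      {"en": "That is good.", "de": "Das ist gut."}))
print(to_text_to_text("sentiment",
                      {"text": "This movie is terrible!", "label": "negative"}))
```

Because everything is just text in and text out, the same encoder-decoder model and training loop can be pointed at any of these tasks unchanged.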
19 Sep 19
If you are reeling from a NeurIPS rejection or stressing about an ICLR submission, remember that some of the best papers were never published anywhere except arXiv. Thread of a few favorites (1/5):
"Generating Sequences with RNNs" by Graves arxiv.org/abs/1308.0850 This paper blew my mind when it came out, showing that it was possible to generate plausible text and handwriting with RNNs. Includes the predecessors of attention, Adam, etc... (2/5)
WaveNet by van den Oord et al. arxiv.org/abs/1609.03499 Until this came out I don't think most of us expected that we'd be able to generate raw waveforms with deep networks anytime soon. The results were surprisingly good and the architecture remains influential. (3/5)
