I'm happy to announce that our paper "The Devil is in the Detail: Simple Tricks Improve Systematic Generalization of Transformers" has been accepted to #EMNLP2021!

paper: arxiv.org/abs/2108.12284
code: github.com/robertcsordas/…

1/4
We improve the systematic generalization of Transformers on SCAN (0% -> 100% with length cutoff 26), CFQ (66% -> 81% on the output-length split), PCFG (50% -> 85% on the productivity split, 72% -> 96% on the systematicity split), COGS (35% -> 81%), and the Mathematics dataset.

2/4
We achieve these large improvements by revisiting model configurations as basic as the scaling of embeddings, early stopping, relative positional embeddings, and weight sharing (Universal Transformers).
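One of these "basic" choices is whether token embeddings are multiplied by sqrt(d_model) before entering the model, as in the original Transformer. A minimal sketch of that scaling (function and variable names here are illustrative, not the paper's code):

```python
import numpy as np

def embed_tokens(token_ids, embedding_table, scale=True):
    """Look up token embeddings, optionally with sqrt(d_model) scaling.

    This is a toy illustration of the embedding-scaling choice, not the
    authors' implementation; `scale=True` mirrors the common "Noam"-style
    convention of multiplying the lookup by sqrt(d_model).
    """
    d_model = embedding_table.shape[1]
    vectors = embedding_table[token_ids]      # shape: (seq_len, d_model)
    if scale:
        vectors = vectors * np.sqrt(d_model)  # sqrt(d_model) scaling
    return vectors

rng = np.random.default_rng(0)
table = rng.normal(size=(100, 16))            # vocab=100, d_model=16
scaled = embed_tokens(np.array([1, 2, 3]), table, scale=True)
unscaled = embed_tokens(np.array([1, 2, 3]), table, scale=False)
```

Whether this factor is applied changes the relative magnitude of embeddings versus positional encodings at initialization, which is why such a small detail can matter for generalization.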

3/4
We also show that relative positional embeddings largely mitigate the EOS decision problem.

Importantly, differences between these models are typically invisible on the IID data split. This calls for proper generalization validation sets.
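The relative positional embeddings mentioned above can be sketched in their simplest bias form (this is a generic illustration, not the paper's exact formulation): the attention logit between query position i and key position j receives a term that depends only on the offset i - j, so the model never sees absolute positions.

```python
import numpy as np

def relative_bias_matrix(seq_len, bias_by_offset):
    """Build a (seq_len, seq_len) attention bias from per-offset values.

    `bias_by_offset` maps offsets in [-(seq_len-1), seq_len-1] to scalars;
    a dict of toy values stands in for what would be a learned table.
    """
    return np.array([[bias_by_offset[i - j] for j in range(seq_len)]
                     for i in range(seq_len)])

# Toy bias: penalize larger distances between query and key.
offsets = {d: float(-abs(d)) for d in range(-3, 4)}
B = relative_bias_matrix(4, offsets)
```

Because every diagonal of B is constant, the bias pattern is the same at every position, which is one intuition for why relative schemes handle longer-than-training sequences (and the EOS decision) more gracefully than absolute ones.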

4/4

Thread by Csordás Róbert