hardmaru @hardmaru
4 tweets
“Our sense of progress largely rests on a small number of standard benchmarks such as CIFAR-10, ImageNet, or MuJoCo. This raises a crucial question: How reliable are our current measures of progress in machine learning?” 🔥
I would love to see a similar study for text and translation: how well do network architectures that are “hyper”-optimized to death for PTB, wikitext, enwik8, WMT’14 EN-FR, EN-DE, etc. transfer to “new” test sets drawn from the same distribution, or even to other text domains?
If we see similar results for PTB, then this is actually quite good news for the deep learning research community: the typical process of hyper-optimizing for performance on a smallish dataset *does* lead to the discovery of new methods that generalize better.
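The kind of check described above can be sketched in a few lines: evaluate the same trained model on the original test set and on a freshly collected one, and measure the accuracy drop. This is a minimal illustration with a toy stand-in for a model (all names and data here are hypothetical, not from the study being quoted):

```python
import numpy as np

def accuracy(preds, labels):
    """Fraction of predictions that match the labels."""
    return float(np.mean(np.asarray(preds) == np.asarray(labels)))

def generalization_gap(model_fn, orig_x, orig_y, new_x, new_y):
    """Accuracy on the original test set minus accuracy on a new
    test set drawn from (approximately) the same distribution."""
    return accuracy(model_fn(orig_x), orig_y) - accuracy(model_fn(new_x), new_y)

# Toy example: a trivial "model" that predicts the parity of its input.
model = lambda xs: [x % 2 for x in xs]
orig_x, orig_y = [1, 2, 3, 4], [1, 0, 1, 0]  # 100% accurate on the original set
new_x, new_y = [5, 6, 7, 8], [1, 0, 0, 0]    # 75% accurate on the "new" set
print(generalization_gap(model, orig_x, orig_y, new_x, new_y))  # 0.25
```

A small positive gap across many models, with their relative ranking preserved, would be the "good news" scenario: progress on the benchmark tracks real progress.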
I would also be interested to see how well AutoAugment performs on the new test set.