A thread on our latest work on optimizers! We tune Nesterov/Adam to match the performance of LARS/LAMB on the workloads where those optimizers are most commonly used. We (@jmgilmer, Chris Shallue, @_arohan_, @GeorgeEDahl) do this to provide more competitive baselines for large-batch training speed measurements.
We are **not** trying to prove that any optimizer is better than any other (more on that later). However, we believe that well-tuned baselines are very important, especially in optimization, where there are so many confounding factors.
In general, we found that LR schedules were the most impactful part of the pipeline to tune (that is, once we got all the model and dataset details to match!).
LARS has been tuned for years in MLPerf, so tuning the LR schedules for the other algorithms is critical. We found that different LR schedule shapes work better for different optimizers, so simply reusing the same schedule for the baseline optimizers can give suboptimal or misleading results.
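To make the schedule-shape point concrete, here is a minimal sketch (not the paper's schedules; all constants are placeholders) of two common shapes built with optax, either of which can be passed to an optimizer as its learning rate:

```python
# Minimal sketch of two LR schedule *shapes* in optax; every constant here is
# a placeholder, not a tuned value from the paper.
import optax

warmup_steps, total_steps = 500, 10_000
peak_lr = 0.1  # hypothetical peak learning rate

# Shape A: linear warmup, then polynomial decay to zero.
poly_schedule = optax.join_schedules(
    schedules=[
        optax.linear_schedule(0.0, peak_lr, warmup_steps),
        optax.polynomial_schedule(peak_lr, 0.0, power=2.0,
                                  transition_steps=total_steps - warmup_steps),
    ],
    boundaries=[warmup_steps],
)

# Shape B: linear warmup, then cosine decay to zero.
cosine_schedule = optax.warmup_cosine_decay_schedule(
    init_value=0.0, peak_value=peak_lr,
    warmup_steps=warmup_steps, decay_steps=total_steps)

# Either callable can be handed to an optimizer as its learning rate, e.g.:
optimizer = optax.sgd(learning_rate=poly_schedule, momentum=0.9, nesterov=True)
```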
Similarly, after fixing bugs in existing Adam baselines for BERT-Large and re-tuning the hyperparameters, including the schedule, we set stronger baselines for batch sizes up to 65k.
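As a rough illustration of what "hyperparameters including the schedule" covers, here is a hedged optax sketch of an Adam(W) configuration in which the schedule and Adam's own knobs (b1, b2, eps, weight decay) are all exposed for tuning; the numbers are placeholders, not the values tuned in the paper:

```python
# Hypothetical AdamW configuration; all values are placeholders to be tuned,
# not the paper's BERT-Large settings.
import optax

schedule = optax.warmup_cosine_decay_schedule(
    init_value=0.0, peak_value=1e-4, warmup_steps=1_000, decay_steps=20_000)

optimizer = optax.adamw(
    learning_rate=schedule,  # the schedule itself is a tuning choice
    b1=0.9,                  # placeholder
    b2=0.999,                # placeholder
    eps=1e-8,                # placeholder
    weight_decay=0.01,       # placeholder
)
```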
We tuned *a lot*, and report an excruciating amount of detail in the appendices. We recommend others release these intermediate results (however silly they may seem), since it is hard to determine a priori which choices to consider when tuning, and these results show our decision path.
We’ve also released #JAX ResNet-50 code at github.com/google/init2wi…, so others can more easily reproduce our results. We believe this is important because, without the MLPerf reference code, we would never have realized how impactful several implementation differences are.
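For readers who want to reproduce the setup from standard pieces rather than the repo itself, a generic JAX/optax update step looks like the sketch below; the parameters and gradients are stand-ins, and this is not code from init2winit:

```python
# Generic optax training-step skeleton with Nesterov momentum SGD; the
# parameters and gradients below are placeholders, not a real ResNet-50.
import jax.numpy as jnp
import optax

params = {"w": jnp.zeros((3, 3))}   # placeholder for model parameters
optimizer = optax.sgd(learning_rate=0.1, momentum=0.9, nesterov=True)
opt_state = optimizer.init(params)

grads = {"w": jnp.ones((3, 3))}     # placeholder for computed gradients
updates, opt_state = optimizer.update(grads, opt_state, params)
params = optax.apply_updates(params, updates)
```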
One aspect of optimizer comparisons we did not study was "ease of tuning". We believe this is still an open question, one that will require extremely careful consideration and study, and we leave it for future work 😀
Comparing optimizers is hard, but in collaboration with researchers in academia and industry, we’re developing a competition to measure the training speed of optimization algorithms, as a new MLCommons working group on algorithmic efficiency: mlcommons.org/en/groups/rese…
A link to the arXiv preprint! arxiv.org/abs/2102.06356
