We recently compared Shampoo against a tuned ensemble of Adam and SM3 at @HomebrewNLP and found that its hyperparameter search space contains many more "winning tickets," which also reach lower losses!
To be precise: while SM3 trained 7 models (0.36%) to a loss below 1.46, Shampoo achieved that with 255 models (11.5%). Additionally, Shampoo's lowest loss is 3.5% lower, which is roughly equivalent to training a 3x bigger model on 3x more data, according to Chinchilla's scaling laws.
Unfortunately, this convergence improvement does not come for free. Computing a Shampoo update incurs significant overhead, as it requires inverse matrix roots of a preconditioner for every parameter. Fortunately, the official implementation amortizes this by recomputing them only every few steps.
For brevity, ours does not:
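Roughly, the per-matrix work looks like this. A heavily simplified PyTorch sketch of the matrix case, not the actual HomebrewNLP or Google implementation (those add momentum, grafting, and damping on top):

```python
import torch

def inverse_root(mat: torch.Tensor, power: int, eps: float = 1e-6) -> torch.Tensor:
    """mat^(-1/power) for a symmetric PSD matrix via eigendecomposition."""
    eigvals, eigvecs = torch.linalg.eigh(mat)
    eigvals = torch.clamp(eigvals, min=eps)
    return eigvecs @ torch.diag(eigvals ** (-1.0 / power)) @ eigvecs.T

def shampoo_update(grad: torch.Tensor, left: torch.Tensor, right: torch.Tensor, lr: float = 1e-3):
    """Simplified Shampoo step for a matrix-shaped parameter.

    `left`/`right` accumulate G G^T and G^T G; the gradient is preconditioned
    from both sides with their inverse 4th roots. The official implementation
    only recomputes these roots every few hundred steps.
    """
    left += grad @ grad.T
    right += grad.T @ grad
    return -lr * inverse_root(left, 4) @ grad @ inverse_root(right, 4)
```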
However, Shampoo trains faster than the baseline even when inverting the preconditioner matrices at every update. Additionally, increasing the batch size from 16 to 256 already reduces the overhead from 25% to 4.1%, so there's little reason to worry.
Most importantly, Shampoo widens the range of "good" hyperparameters. That's one less hyperparameter to worry about when starting a new project.
Looking at the plot below, it seems as if Shampoo accepts virtually any configuration and returns a great model.
Lastly, I'd like to thank TensorFork and the TPU Research Cloud for funding this project, as the sweeps above used over 85,000 (preemptible) TPU-core hours. If you'd like to learn more about them, have a look at my previous thread:
Above, I only showed _that_ Shampoo works but didn't explain how it achieves these massive improvements.
Luckily, @_arohan_ wrote a detailed thread explaining the inner workings and related work:
In a paper review, @ykilcher also explained one of the critical components that make Shampoo work: Optimizer Grafting
I'd definitely recommend checking it out:
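The short version, as I understand it: grafting combines two optimizers by taking the step direction from one and the per-layer step size from the other, so Shampoo's direction can ride on a well-tuned first-order optimizer's magnitude. A toy sketch; I read the A#B notation that shows up later in this collection as this kind of combination, which convention supplies what is my assumption rather than something stated here:

```python
import torch

def graft(norm_update: torch.Tensor, direction_update: torch.Tensor, eps: float = 1e-16) -> torch.Tensor:
    """Rescale `direction_update` so its layer-wise norm matches `norm_update`.

    Toy illustration of grafting: one optimizer supplies the direction,
    the other supplies the step size.
    """
    scale = norm_update.norm() / (direction_update.norm() + eps)
    return direction_update * scale

# e.g. keep Shampoo's direction but a tuned AdamW's step size per layer
adam_step = torch.randn(512, 512)     # stand-in for a real AdamW update
shampoo_step = torch.randn(512, 512)  # stand-in for a real Shampoo update
update = graft(adam_step, shampoo_step)
```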
Unlike AdamW, TGAdam performs well across a wide range of hyperparameters. Additionally, it can significantly outperform the baseline (MNIST+LR=0.1) with minimal tuning.
Below, you can see the aggregated results of over 6,986 runs across architectures and datasets:
Large-scale tests on ImageNet or GPT are still outstanding, so take these results with a pile of salt.
However, these results don't come out of nowhere. In fact, TGAdamW is theoretically well-motivated.
We tried Shampoo with a few more settings and compared it against AdamW as that's more common than SM3.
TL;DR: Shampoo is still better, but Shampoo#AdamW > AdamW
To go into a bit more detail:
The best pure Adam(W) outperforms the previous best (SM3#Shampoo) by 9.1%.
This is likely caused by our model's significant architectural changes as we switched from Attention to Bottleneck-Convolution+RNN. For Attention, SM3 might still be better.
Interestingly, looking at Adam vs. Adam#Shampoo, it appears that the previous benefits have largely vanished. The loss difference between the two dropped to 1.35%, compared to the previous 3.5%:
OpenAI just released VPT ("Video PreTraining"), a video GPT that "solved" Minecraft.
Below, we'll take apart their model to the point where we can start reproducing it.
If you're interested in training this on "the world," join our discord server: discord.gg/24WsKDsV6w
Let's start with their architectural description.
The core of their system has three parts:
1) "Data Cleaning": web-scale scraping and filtering
2) "IDM": a BERT-like inverse dynamics model that generates the training data by labeling the scraped videos with actions
3) "VPT": a GPT trained on the labeled video
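Before going part by part, here's my rough reading of how the pieces fit together. Every function name below is a hypothetical stand-in, and the small contractor-labeled IDM training set is a detail from the VPT paper rather than this thread:

```python
# Hypothetical sketch of the pipeline described above; nothing here is from OpenAI's release.

def is_clean(clip) -> bool: ...               # 1) filter face-cams, overlays, unwanted content
def train_inverse_dynamics_model(demos): ...  # 2) BERT-like IDM, trained on a small labeled set
def train_behavior_cloning_gpt(data): ...     # 3) causal GPT on (video, action) pairs

def build_vpt(raw_web_videos, contractor_demos):
    clips = [c for c in raw_web_videos if is_clean(c)]
    idm = train_inverse_dynamics_model(contractor_demos)
    # the IDM "generates data" by pseudo-labeling the cleaned web videos with actions
    pseudo_labeled = [(c, idm.predict_actions(c)) for c in clips]
    return train_behavior_cloning_gpt(pseudo_labeled)
```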
1) Data Cleaning
As with most web-scale datasets, some cleaning has to be done to ensure the model won't be trained on unethical inputs such as Minecraft swastikas. Additionally, they decided to remove hard-to-learn inputs like facecams and overlays to improve training efficiency.
"Sparse is Enough in Scaling Transformers", a recent paper by Sebastian Jaszczur from Google Research, shows 40x speedups at inference using structured sparsity without reducing downstream performance.
Note that, above, the loss plot is not an official image from the paper. Instead, the authors published all of their runs on a public tensorboard: tensorboard.dev/experiment/on3….
This way, we can compare the results ourselves.
For example, it's a little suspicious how well their "sff64" model performs, considering that "sff32" and "sff128" both underperform the baseline significantly.
So let's try to understand what's going on.
It is incorrect and causes unnecessary harm to the authors of "PoolFormer: MetaFormer is Actually What You Need for Vision" (arxiv.org/abs/2111.11418).
Using just AvgPool and MLP, they outperform most models.
They added a comparison with "ResNet strikes back" (arxiv.org/abs/2110.00476) on GitHub (github.com/sail-sg/poolfo…), showing how they outperform ResNet+ by training PoolFormer with DeiT's augmentations.
The most incredible part about all of this is that they effectively run
x - LayerNorm(x) + AvgPool(LayerNorm(x))
as the token-mixing method, instead of expensive and difficult-to-scale convolutions or self-attention.
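A minimal PyTorch sketch of that mixer for a 1D token sequence; the official PoolFormer operates on 2D feature maps and uses GroupNorm, so this only illustrates the formula, not the repo's code:

```python
import torch
from torch import nn

class PoolMixer(nn.Module):
    """Token mixing as in the formula above: x - Norm(x) + AvgPool(Norm(x))."""

    def __init__(self, dim: int, pool_size: int = 3):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.pool = nn.AvgPool1d(pool_size, stride=1, padding=pool_size // 2,
                                 count_include_pad=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (batch, tokens, dim)
        normed = self.norm(x)
        # average-pool across the token dimension
        pooled = self.pool(normed.transpose(1, 2)).transpose(1, 2)
        return x - normed + pooled
```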
This speedup is almost as significant as Switch Transformer's (arxiv.org/abs/2101.03961), which got up to 7x speedups using 64x as many (sparse) parameters.
Primer, however, doesn't use more parameters. It's also orthogonal to Switch, so a combined 32x speedup seems plausible.
There's just one slight issue: The baseline.
Primer compares itself with a default transformer and has no ablations of individual changes.
Instead, they trained a standard 2B GPT3-XL for 2 trillion tokens, spending well over $1,000,000 on this one figure.