Simon Batzner
Research at Google DeepMind. In a previous life created Equivariant Interatomic Potentials.

Dec 17, 2021, 14 tweets

🚀🚨 Equivariance changes the Scaling Laws of Interatomic Potentials 🚨🚀

We have updated the NequIP paper with up to 3x lower errors + (still) SOTA on MD17

Plus: equivariance changes the power-law exponent of the learning curve!

arxiv.org/abs/2101.03164…

👇🧵 #compchem #GNN

Learning curves of error vs training set size typically follow a power law: error = a * N^b, where N is the number of training samples and the exponent b determines how fast a method learns as new data become available.
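As a concrete illustration of how such an exponent is estimated (not from the paper): fit a straight line to the learning curve in log-log space. The numbers below are made-up placeholders.

```python
import numpy as np

# Estimate the power-law exponent b in error = a * N**b by a linear fit
# in log-log space. N and err are made-up placeholder values.
N = np.array([100, 250, 500, 1000])       # training set sizes
err = np.array([40.0, 22.0, 13.0, 7.5])   # e.g. force MAE at each size

slope, intercept = np.polyfit(np.log(N), np.log(err), 1)
b = slope              # power-law exponent (more negative = learns faster)
a = np.exp(intercept)  # prefactor (sets the vertical offset of the curve)
print(f"error ≈ {a:.1f} * N^{b:.2f}")
```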

Interestingly, it has been found that different models trained on the same data set usually only shift the learning curve but do not change the power-law exponent; see e.g. [1].

[1] arxiv.org/abs/1712.00409

The same trend is observed on MD17: better models only shift the learning curve down but don't change its slope. This holds across kernel methods and deep NNs with different descriptors; they all share approximately the same slope.

NequIP completely breaks this pattern! We observe a steeper slope than all other methods, meaning NequIP learns faster as new data become available. As discussed, this goes against common wisdom.

Now for the interesting part: when we turn off equivariance in NequIP and make it invariant, we recover the same slope that all other models observe! (reminder: an invariant l=0 NequIP network is equivalent to SchNet, i.e. only scalar features and only scalar interactions)
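For readers unfamiliar with the l notation, here is a minimal sketch using e3nn's Irreps (the library NequIP builds on) to contrast an invariant, scalars-only feature space with an equivariant one; the multiplicities are illustrative, not the paper's settings.

```python
from e3nn import o3

# Invariant features: scalars only (l=0), as in an l=0 NequIP / SchNet-like model.
invariant_feats = o3.Irreps("32x0e")

# Equivariant features: scalars plus l=1 vectors and l=2 tensors.
equivariant_feats = o3.Irreps("32x0e + 32x1o + 32x2e")

# Dimensions: each l channel carries 2l+1 components.
print(invariant_feats.dim)    # 32
print(equivariant_feats.dim)  # 32 + 32*3 + 32*5 = 288
```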

Further increasing the irreps order l in NequIP to l={2, 3} again only shifts the learning curve, but does not increase the slope.

This isolates equivariance as the cause of the change in exponent. We also ran versions of these learning curves that control for the fact that the equivariant network has more weights and more hidden features (see appendix); in both cases the effect holds.

We show this on two different systems, water and aspirin from MD17; in both cases the same effect is observed (the plot in the thread shows water). The trend also holds for energies when they are trained together with forces (i.e. a typical ML potential training).

Besides learning curves, we also added new results with higher-order (l>1) tensors on all systems, plus energy errors. An explicit l-scan shows a consistent improvement from using l>1 tensors. There is a trade-off between memory and accuracy, but if you can afford them, use them!

We also compare MD17 and revMD17 and find, in accordance with work from @AndersSChristen + @ProfvLilienfeld, that there are large differences in the energy errors, strongly suggesting that the MD17 energy labels are noisy and that revMD17 should be used instead.

Paper: arxiv.org/abs/2101.03164…
Code: github.com/mir-group/nequ…

The code is integrated with LAMMPS and easy to use; you can find an intro Colab for NequIP and the LAMMPS integration here: bit.ly/nequip-tutorial

Work done together with a brilliant team of collaborators and friends: Albert Musaelian, Lixin Sun, Mario Geiger/@mario1geiger, Jonathan P. Mailoa, Mordechai Kornbluth, Nicola Molinari, Tess E. Smidt/@tesssmidt , Boris Kozinsky/@bkoz37
