Simon Batzner
Dec 17, 2021 • 14 tweets • 8 min read
🚀🚨 Equivariance changes the Scaling Laws of Interatomic Potentials 🚨🚀

We have updated the NequIP paper with up to 3x lower errors + (still) SOTA on MD17

Plus: equivariance changes the power-law exponent of the learning curve!

arxiv.org/abs/2101.03164…

👇🧵 #compchem #GNN
Learning curves of error vs training set size typically follow a power law: error = a * N^b, where N is the number of training samples and the exponent b determines how fast a method learns as new data become available.
Interestingly, it has been found that different models on the same data set usually only shift the learning curve, but do not change the power-law exponent, see e.g. [1]

[1] arxiv.org/abs/1712.00409
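
As a concrete illustration (with made-up numbers, not results from the paper), the exponent b can be read off a measured learning curve with a straight-line fit in log-log space:

```python
import numpy as np

# Hypothetical learning-curve data (illustrative only, not from the paper):
# training-set sizes N and corresponding test errors, e.g. force MAE in meV/A.
N = np.array([100, 200, 400, 800, 1600, 3200])
error = np.array([52.0, 36.0, 25.0, 17.5, 12.2, 8.5])

# error = a * N**b  =>  log(error) = log(a) + b * log(N),
# so b is simply the slope of a linear fit in log-log space.
# b is negative when the error decreases with more data; a steeper
# (more negative) slope means the method learns faster.
b, log_a = np.polyfit(np.log(N), np.log(error), deg=1)
print(f"exponent b ~ {b:.2f}, prefactor a ~ {np.exp(log_a):.1f}")
```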
The same trend is observed on MD-17: different models all share approximately the same slope, and better models only shift the learning curve down without changing it. This holds across kernel methods and deep NNs with different descriptors.
NequIP completely breaks this pattern! We observe a steeper slope than all other methods, which means NequIP learns faster as new data become available. As discussed, this goes against common wisdom.
Now for the interesting part: when we turn off equivariance in NequIP and make it invariant, we recover the same slope that all other models observe! (reminder: an invariant l=0 NequIP network is equivalent to SchNet, i.e. only scalar features and only scalar interactions)
Further increasing the irreps order l in NequIP to l={2, 3} again only shifts the learning curve, but does not increase the slope.
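
For intuition, here is a minimal sketch using the e3nn library that NequIP is built on (not code from the paper; the actual NequIP interaction blocks are more involved): the choice of l sets which irreps the features carry, and l=0-only features are pure scalars.

```python
from e3nn import o3

# l = 0 only: scalar (invariant) features -- the SchNet-like setting.
irreps_l0 = o3.Irreps("32x0e")

# l = 1 adds vector features; l = 2 additionally adds rank-2 tensor features.
irreps_l1 = o3.Irreps("32x0e + 32x1o")
irreps_l2 = o3.Irreps("32x0e + 32x1o + 32x2e")

# In an equivariant convolution, neighbor features are combined with the
# spherical harmonics of the relative positions via a tensor product.
# With l=0-only irreps this collapses to ordinary scalar products.
sh = o3.Irreps.spherical_harmonics(lmax=2)   # 1x0e + 1x1o + 1x2e
tp = o3.FullyConnectedTensorProduct(irreps_l1, sh, irreps_l1)
print(tp)  # repr shows the allowed (l_in, l_filter) -> l_out paths and weight count
```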
This isolates equivariance as the cause of the change in exponent. We also ran versions of these learning curves that control for the fact that the equivariant network has more weights and more hidden features (see appendix); in both cases, the effect holds.
We show this on two different systems, water and aspirin from MD-17; in both cases the same effect is observed (the plot shown is for water). The trend also holds for energies when they are trained together with forces (i.e., a typical ML potential training setup).
Besides learning curves, we also added new results with higher-order (l>1) tensors on all systems, plus energy errors. We also ran an explicit l-scan and see a consistent improvement from using l>1 tensors. There is a memory/accuracy trade-off, but if you can afford them, use them!
We also compare MD-17 and revMD17 and find, in accordance with work from @AndersSChristen + @ProfvLilienfeld, large differences in the energy errors, strongly suggesting that the MD-17 energy labels are noisy and that revMD17 should be used instead.
Paper: arxiv.org/abs/2101.03164…
Code: github.com/mir-group/nequ…

The code is integrated with LAMMPS and easy to use; you can find an intro Colab for NequIP + the LAMMPS integration here: bit.ly/nequip-tutorial
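
As a rough sketch of the typical Python-side workflow (class and argument names follow the NequIP README at the time of writing and may differ between versions; see the repo and Colab for the authoritative steps):

```python
# Minimal sketch, assuming a model deployed with `nequip-deploy build` and the
# ASE calculator interface shipped with nequip -- names may differ between
# versions. File names here are hypothetical.
from ase.io import read
from nequip.ase import NequIPCalculator

atoms = read("water.xyz")  # hypothetical starting structure
atoms.calc = NequIPCalculator.from_deployed_model(
    model_path="deployed_model.pth",            # output of `nequip-deploy build`
    species_to_type_name={"H": "H", "O": "O"},  # map chemical symbols to model atom types
)

print("energy [eV]:", atoms.get_potential_energy())
print("forces shape:", atoms.get_forces().shape)
```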
Work done together with a brilliant team of collaborators and friends: Albert Musaelian, Lixin Sun, Mario Geiger/@mario1geiger, Jonathan P. Mailoa, Mordechai Kornbluth, Nicola Molinari, Tess E. Smidt/@tesssmidt, Boris Kozinsky/@bkoz37

More from @simonbatzner

Apr 21, 2023
🚨Deep learning for large-scale biomolecular dynamics is here!🚨

Today, our group is releasing new work showing how the SOTA accuracy of Allegro can be scaled to massive biomolecular systems up to the full HIV capsid at 44 million atoms!

arxiv.org/pdf/2304.10061… #compchem

1/🧵
We scale a large, pretrained Allegro model on various systems: from DHFR at 23k atoms, to Factor IX at 91k, Cellulose at 400k, and the all-atom, fully solvated HIV capsid at 44 million, all the way up to >100 million atoms. 2/
This is all done with a pretrained Allegro model w/ 8 million weights at high accuracy (a force error of 26 meV/A), trained on 1 million structures at hybrid-functional accuracy using the amazing SPICE dataset. At 8 million weights, this is a large + powerful model we're scaling here. 3/
Dec 1, 2022
A conversation with ChatGPT about DFT, quantum chemistry, and machine learning.

🧵 1/
Apr 12, 2022
🚨 What comes after Neural Message Passing? 🚨

Introducing Allegro - a new approach:

- no message passing or attention
- new SOTA on QM9+MD17
- scales to >100 million atoms
- 1 layer beats all MPNNs+Transformers
- blazing fast
- theory

arxiv.org/abs/2204.05249

How? 👇 #compchem
First and foremost: this was joint work with my co-first author and good friend Albert Musaelian (equal first-author contribution), as well as with lab members Anders Johansson, Lixin Sun, Cameron Owen + Mordechai Kornbluth, and of course @BKoz / Boris Kozinsky
Message Passing Neural Networks have taken molecular ML by storm, and over the past few years a lot of the progress in machine learning for molecules and materials has been variations on this theme.
Jan 11, 2021
We're excited to introduce NequIP, an equivariant Machine Learning Interatomic Potential that not only obtains SOTA on MD-17, but also outperforms existing potentials with up to 1000x fewer data! w/ @tesssmidt @Materials_Intel @bkoz37 #compchem 👇🧵 1/N

arxiv.org/pdf/2101.03164…
NequIP (short for Neural Equivariant Interatomic Potentials) extends Graph Neural Network Interatomic Potentials that use invariant convolutions over scalar feature vectors to instead utilize rotation-equivariant convolutions over tensor features (i.e. scalars, vectors, ...). 2/N
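
To make "rotation-equivariant" concrete, here is a small check (a sketch using e3nn, not code from the paper): computing the l=1 spherical harmonics of rotated positions gives the same result as computing them first and then applying the corresponding Wigner D-matrix.

```python
import torch
from e3nn import o3

torch.manual_seed(0)
x = torch.randn(5, 3)                 # 5 relative-position vectors
R = o3.rand_matrix()                  # a random rotation matrix
D = o3.Irrep("1o").D_from_matrix(R)   # how l=1 features transform under R

# Equivariance: Y(R x) == D(R) Y(x), written here in batched row-vector form.
sh_then_rotate = o3.spherical_harmonics(1, x, normalize=True) @ D.T
rotate_then_sh = o3.spherical_harmonics(1, x @ R.T, normalize=True)
assert torch.allclose(sh_then_rotate, rotate_then_sh, atol=1e-5)
```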
We benchmark NequIP on a wide variety of molecules + materials: we start with atomic forces from MD-17 with 1,000 training configurations and find that we not only outperform other deep neural networks, but also perform better than or on par with kernel-based methods. 3/N
