First and foremost: this was joint work with my co-first author and good friend Albert Musaelian (equal contribution), together with lab members Anders Johansson, Lixin Sun, Cameron Owen + Mordechai Kornbluth and of course @BKoz / Boris Kozinsky.
Message Passing Neural Networks have taken molecular ML by storm, and over the past few years much of the progress in machine learning for molecules and materials has been a variation on this theme.
We propose an alternative approach: a new layer based simply on tensor products. We demonstrate that it outperforms all existing approaches, including all Message Passing Neural Networks + Transformers. No message passing, no attention.
The interesting part: it's purely local, using strict cutoffs and no propagation of information. Due to this locality, we can make use of GPU parallelism and scale it to massive systems. This also challenges the idea that the best models need Message Passing or Attention.
Here's how it works: we start by decomposing the total energy of the system into a set of pairwise contributions E_(i,j), instead of the conventional per-atom decomposition. We then embed each pair of atoms (i,j) via a weighted projection onto the spherical harmonics.
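In formulas, the decomposition and the initial pair embedding look roughly like this (a sketch; the exact form of the learned weights is simplified here for illustration):

```latex
E_{\mathrm{total}} \;=\; \sum_{i} \sum_{j \in \mathcal{N}(i)} E_{ij},
\qquad
V^{(0)}_{ij,\ell m} \;=\; w_{ij,\ell}\, Y_{\ell m}(\hat{r}_{ij}),
```

where \mathcal{N}(i) is the set of neighbors of atom i within the cutoff, \hat{r}_{ij} is the unit vector from i to j, Y_{\ell m} are the spherical harmonics, and the weights w_{ij,\ell} are learned (generated by an MLP, see below).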
These E(3)-equivariant features, which describe the state of the pair (i,j), are updated by computing a tensor product between the current feature of pair (i,j) and all other pair features in the environment of atom i, i.e. the tensor products between (i,j) and all pairs (i,k).
Naively, this gives quadratic scaling in the number of neighbors. The central mathematical trick is that we can exploit the bilinearity of the tensor product to first compute the sum over the environment and then compute the tensor product. This reduces the operation to a single tensor product per pair.
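The trick in one line (same notation as above):

```latex
\sum_{k \in \mathcal{N}(i)} \big( V_{ij} \otimes V_{ik} \big)
\;=\;
V_{ij} \otimes \Big( \sum_{k \in \mathcal{N}(i)} V_{ik} \Big),
```

so the environment sum is computed once per central atom, and each pair then needs only a single tensor product instead of one per neighbor.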
The weights for the embedding are generated by an MLP. We mix the tensor product outputs with an equivariant linear layer / an MLP and then iteratively apply this procedure. At the output, we sum the pair energies E_(i,j) into the total energy. Autodiff for the forces. That's it.
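To make the dataflow concrete, here is a minimal, deliberately non-equivariant PyTorch sketch of one such layer: plain outer products stand in for the E(3)-equivariant tensor products of irreps, and all names, widths, and the toy distance embedding are made up. This is not the released implementation.

```python
import torch

# Minimal, NON-equivariant sketch of the layer described above. Plain outer
# products stand in for the E(3)-equivariant tensor products of irreps; all
# names, widths, and the toy distance embedding are made up, not the released code.

F, H = 16, 32  # pair-feature width, MLP hidden width

embed = torch.nn.Sequential(torch.nn.Linear(1, H), torch.nn.SiLU(), torch.nn.Linear(H, F))
mix = torch.nn.Linear(F * F, F)   # stands in for the equivariant linear mixing
readout = torch.nn.Linear(F, 1)   # per-pair energy E_(i,j)

def total_energy(pos, pairs):
    """pos: (N, 3) positions; pairs: list of (i, j) index pairs within the cutoff."""
    i_idx = torch.tensor([i for i, j in pairs])
    j_idx = torch.tensor([j for i, j in pairs])
    r_ij = (pos[j_idx] - pos[i_idx]).norm(dim=-1, keepdim=True)   # (P, 1)

    V = embed(r_ij)                                               # initial pair features, (P, F)

    # Bilinearity trick: sum the pair features over each central atom's
    # environment first, then take a single product per pair.
    env = torch.zeros(pos.shape[0], F).index_add(0, i_idx, V)     # (N, F)
    prod = torch.einsum("pa,pb->pab", V, env[i_idx]).flatten(1)   # one "tensor product" per pair
    V = mix(prod)                                                 # mix outputs back to F features
    # (The real model iterates several such layers; only one is shown here.)

    E_ij = readout(V)                                             # per-pair energies
    return E_ij.sum()                                             # total energy

pos = torch.randn(5, 3, requires_grad=True)
pairs = [(i, j) for i in range(5) for j in range(5) if i != j]    # toy all-pairs neighbor list
E = total_energy(pos, pairs)
forces = -torch.autograd.grad(E, pos)[0]                          # forces via autodiff
print(E.item(), forces.shape)
```

The real model keeps everything E(3)-equivariant, but the skeleton is the same: embed pairs, sum over the environment, take one tensor product per pair, mix, read out pair energies, and differentiate for forces.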
We benchmark this extremely simple idea and find state-of-the-art performance on revMD17 and QM9, outperforming all other methods. Interestingly, we find that even a single tensor product layer outperforms all message-passing + transformer-based approaches on QM9!
We also test generalization to out-of-distribution data on the 3BPA benchmark and find that Allegro greatly outperforms existing potentials, including an ANI model pretrained on 8.9 million molecules (we use 500 molecules). The only competitive method is NequIP.
This breaks with the notion that Deep Learning Interatomic Potentials don't generalize. They often generalize better, in fact much better, than linear models or kernel methods, but you have to use the right ones.
Due to the locality, Allegro is extremely scalable: we show that on as few as 16 GPU nodes (8x A100 each), we can scale it up to 100 million atoms and a speed of 1.5 ns/day, all at excellent accuracy (previous approaches had to use ~27,000 GPUs for sizes like these).
Oh, and it is fast: on DFT-sized systems, we can simulate ~90 ns/day on 1 NVIDIA DGX A100 GPU, ~10 ns/day on 1 million atoms on 1 node, and ~50 ns/day on 1 million atoms on 8 nodes!
Everything is integrated with LAMMPS running fully on the GPU (inference + integration) --> no CPU-GPU transfer! We report strong scaling results and see a nice scale-up both intra- and inter-node.
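For LAMMPS users, the intent is for this to feel like any other pair style. A hypothetical input fragment (the pair style name, file name, and element mapping below are assumptions made for illustration, not released syntax):

```
# Hypothetical LAMMPS fragment: load a deployed Allegro model and map
# LAMMPS atom types to the model's chemical species (names are made up).
pair_style      allegro
pair_coeff      * * deployed_allegro.pth Li P O
```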
We then show Allegro works well in the wild, i.e. not just MD-17 ;-) We demonstrate that Allegro predicts the structure + Li dynamics of a complex amorphous phosphate material with an extremely scalable model.
Code will be made public soon. It's integrated with LAMMPS + ASE. Everything is built on top of our NequIP API to make usage as simple as possible (github.com/mir-group/nequ…). If you're a NequIP user, this is a change of a few lines in your input file (if not, welcome to the family).
Deep learning for large-scale biomolecular dynamics is here!
Today, our group is releasing new work showing how the SOTA accuracy of Allegro can be scaled to massive biomolecular systems up to the full HIV capsid at 44 million atoms!
We scale a large, pretrained Allegro model across a range of systems: from DHFR at 23k atoms, to Factor IX at 91k, cellulose at 400k, and the all-atom, fully solvated HIV capsid at 44 million, all the way up to >100 million atoms. 2/
This is all done with a pretrained Allegro model with 8 million weights at a high accuracy of 26 meV/Å force error, trained on 1 million structures at hybrid-functional accuracy using the amazing SPICE dataset. At 8 million weights, this is a large + powerful model we're scaling here. 3/
Learning curves of error vs training set size typically follow a power law: error = a * N^b, where N is the number of training samples and the exponent b determines how fast a method learns as new data become available.
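For example, the exponent b can be read off as the slope of a straight-line fit in log-log space (the numbers below are made up purely to illustrate the fit):

```python
import numpy as np

# Illustrative learning-curve fit: error = a * N**b  =>  log(error) = log(a) + b*log(N).
N = np.array([100, 250, 500, 1000, 2500, 5000])      # training-set sizes (made up)
err = np.array([52.0, 33.0, 23.0, 16.0, 10.5, 7.4])  # test errors, e.g. in meV/A (made up)

b, log_a = np.polyfit(np.log(N), np.log(err), 1)     # slope = exponent b, intercept = log(a)
print(f"a = {np.exp(log_a):.1f}, b = {b:.2f}")       # b < 0: error falls as N grows
```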
Interestingly, it has been found that different models on the same dataset usually only shift the learning curve but do not change the power-law exponent, see e.g. [1].
We're excited to introduce NequIP, an equivariant Machine Learning Interatomic Potential that not only obtains SOTA on MD-17, but also outperforms existing potentials with up to 1000x fewer data! w/ @tesssmidt @Materials_Intel @bkoz37 #compchem 1/N
NequIP (short for Neural Equivariant Interatomic Potentials) extends Graph Neural Network Interatomic Potentials that use invariant convolutions over scalar feature vectors to instead utilize rotation-equivariant convolutions over tensor features (i.e. scalars, vectors, ...). 2/N
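For a flavor of what "tensor features" means in practice, here is a tiny sketch using the e3nn library that NequIP builds on; the irreps choices and sizes are arbitrary illustration choices, and this is not the NequIP architecture itself.

```python
import torch
from e3nn import o3

# Toy equivariant tensor product: features carrying scalars (l=0) and vectors
# (l=1) are combined with the spherical-harmonic expansion of an edge direction.
# Widths and lmax are arbitrary illustration choices.
irreps_feat = o3.Irreps("8x0e + 8x1o")               # 8 scalars + 8 vectors
irreps_sh = o3.Irreps.spherical_harmonics(lmax=2)    # 1x0e + 1x1o + 1x2e
irreps_out = o3.Irreps("8x0e + 8x1o + 8x2e")

tp = o3.FullyConnectedTensorProduct(irreps_feat, irreps_sh, irreps_out)

feat = irreps_feat.randn(10, -1)                     # features for 10 edges
edge_vec = torch.randn(10, 3)                        # edge direction vectors
sh = o3.spherical_harmonics(irreps_sh, edge_vec, normalize=True)

out = tp(feat, sh)                                   # rotation-equivariant output
print(out.shape)                                     # (10, irreps_out.dim)
```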
We benchmark NequIP on a wide variety of molecules + materials: we start with atomic forces from MD-17 with 1,000 training configurations and find that we not only outperform other deep neural networks, but also perform better than, or on par with, kernel-based methods. 3/N