🚨Deep learning for large-scale biomolecular dynamics is here!🚨
Today, our group is releasing new work showing how the SOTA accuracy of Allegro can be scaled to massive biomolecular systems up to the full HIV capsid at 44 million atoms!
We scale a large, pretrained Allegro model across a range of systems: DHFR at 23k atoms, Factor IX at 91k, cellulose at 400k, the all-atom, fully solvated HIV capsid at 44 million, and on up to systems of more than 100 million atoms. 2/
This is all done with a pretrained Allegro model with 8 million weights at high accuracy (a force error of 26 meV/A), trained on 1 million structures at hybrid-functional accuracy using the amazing SPICE dataset. At 8 million weights, this is a large and powerful model we're scaling here. 3/
We show strong scaling up to >100 million atoms and, across various large systems, demonstrate top speeds of >100 steps/second (convert with your favourite time step to see the ns/day this brings for massive systems). Weak scaling reaches 70% efficiency on 5120 A100s!! 4/
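For reference, a back-of-the-envelope sketch of that steps/second to ns/day conversion; the 2 fs timestep is just an assumed example value, not a number from the paper:

```python
# Hypothetical conversion sketch: simulated time per day =
# steps/second * timestep * seconds/day.
def ns_per_day(steps_per_second: float, timestep_fs: float = 2.0) -> float:
    seconds_per_day = 86_400
    fs_per_day = steps_per_second * timestep_fs * seconds_per_day
    return fs_per_day / 1e6  # 1 ns = 1e6 fs

print(ns_per_day(100, timestep_fs=2.0))  # ~17.3 ns/day at 100 steps/s with a 2 fs timestep
```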
Our goal here is not to train the most accurate general-purpose potential for biomolecular simulations (although we trained a really powerful one here), but to demonstrate that Allegro is the right tool for the job. With DL models like Allegro, the focus is now on the data. 5/
Our simulations are stable over nanoseconds out of the box, and in general we saw hardly any problems during production, even at the massive scale of the 44-million-atom HIV capsid, which is usually where problems become much more apparent. Things just worked. 6/
This is also our first public demonstration of Allegro-v2, an even faster version of Allegro we've been working on, with an optimized memory layout for equivariant networks, new modes of computing the tensor product, as well as per-pair cutoffs that save us lots of compute. 7/
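To give an idea of what a per-pair cutoff means, here is a conceptual sketch, not the Allegro-v2 implementation; the species pairs and cutoff values are hypothetical:

```python
# Conceptual sketch of per-pair cutoffs (illustrative only): keep a neighbor
# pair (i, j) only if its distance is below a cutoff that depends on the
# chemical species of both atoms, so cheap pairs get shorter cutoffs.
per_pair_cutoff = {("H", "H"): 3.0, ("H", "O"): 4.0, ("O", "O"): 5.0}  # hypothetical values in A

def keep_pair(species_i: str, species_j: str, distance: float) -> bool:
    key = tuple(sorted((species_i, species_j)))
    return distance < per_pair_cutoff[key]

print(keep_pair("O", "H", 3.5))  # True: within the assumed 4.0 A H-O cutoff
```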
It is the combination of Allegro's strict locality and SOTA equivariance that allows it to do this, and it is why message-passing methods have been not only less scalable but also much slower: they can't spread the compute across nodes as easily, which means a slower time to solution. 8/
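To illustrate that locality argument with rough, hypothetical numbers: with T message-passing layers of cutoff r_c, information propagates out to T * r_c, so each domain in a parallel run has to communicate a much thicker halo of ghost atoms than a strictly local model needs:

```python
# Illustrative numbers only (not from the paper): halo thickness each rank
# must exchange, for a strictly local model vs. T rounds of message passing.
cutoff_A = 6.0   # hypothetical local cutoff in Angstrom
mp_layers = 5    # hypothetical number of message-passing layers

print("strictly local halo:  ", cutoff_A, "A")              # information stays within r_c
print("message-passing halo: ", mp_layers * cutoff_A, "A")  # grows to T * r_c
```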
Thank you to @NERSC for computing resources and Peter Eastman from Stanford for help with SPICE. 9/
We hope this opens up new avenues in biochemistry and drug discovery, allowing us to understand the dynamics of large biomolecular systems and how proteins and drugs interact at the atomistic level. 10/10
First and foremost: this was joint work with my co-first author and good friend Albert Musaelian (equal first-author contribution), together with lab members Anders Johansson, Lixin Sun, Cameron Owen + Mordechai Kornbluth, and of course @BKoz / Boris Kozinsky
Message Passing Neural Networks have taken molecular ML by storm, and over the past few years much of the progress in machine learning for molecules and materials has been variations on this theme.
Learning curves of error vs. training set size typically follow a power law: error = a * N^b (with b < 0), where N is the number of training samples and the exponent b determines how fast a method learns as new data become available.
Interestingly, it has been found that different models trained on the same dataset usually only shift the learning curve but do not change the power-law exponent; see e.g. [1]
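As a minimal sketch of how such an exponent is read off a learning curve (the numbers below are synthetic, not results from any paper), a log-log linear fit recovers b:

```python
# Fit the power-law exponent b in error = a * N**b via a log-log linear fit.
import numpy as np

N = np.array([125, 250, 500, 1000, 2000])        # training set sizes (hypothetical)
err = np.array([52.0, 38.0, 27.5, 20.0, 14.6])   # errors in meV/A (hypothetical)

b, log_a = np.polyfit(np.log(N), np.log(err), deg=1)  # slope, intercept
print(f"exponent b ~ {b:.2f}, prefactor a ~ {np.exp(log_a):.1f}")
# b comes out negative: the error falls as the training set grows.
```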
We're excited to introduce NequIP, an equivariant Machine Learning Interatomic Potential that not only obtains SOTA on MD-17, but also outperforms existing potentials with up to 1000x fewer data! w/ @tesssmidt @Materials_Intel @bkoz37 #compchem 👇🧵 1/N
NequIP (short for Neural Equivariant Interatomic Potentials) extends Graph Neural Network Interatomic Potentials that use invariant convolutions over scalar feature vectors to instead utilize rotation-equivariant convolutions over tensor features (i.e. scalars, vectors, ...). 2/N
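For readers new to the term, here is a minimal numpy sketch of what "rotation-equivariant" means; this is illustrative only, not NequIP code, and the toy layer and numbers are hypothetical:

```python
# An equivariant layer f satisfies f(R @ x) == R @ f(x) for every rotation R.
import numpy as np

def mix_vector_channels(vectors, weights):
    # Mixes vector (l=1) features channel-wise without mixing x/y/z components,
    # so the operation commutes with rotations.
    return vectors @ weights            # (3, c_in) @ (c_in, c_out) -> (3, c_out)

rng = np.random.default_rng(0)
x = rng.normal(size=(3, 4))             # 4 vector features
W = rng.normal(size=(4, 2))             # hypothetical channel-mixing weights

theta = 0.3                             # rotation about the z-axis
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])

print(np.allclose(mix_vector_channels(R @ x, W),
                  R @ mix_vector_channels(x, W)))   # True: rotate-then-apply == apply-then-rotate
```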
We benchmark NequIP on a wide variety of molecules and materials: we start with atomic forces from MD-17 with 1,000 training configurations and find that we not only outperform other deep neural networks, but also perform better than, or on par with, kernel-based methods. 3/N