Research at Google DeepMind. In a previous life created Equivariant Interatomic Potentials.
Apr 21, 2023 • 10 tweets • 4 min read
🚨 Deep learning for large-scale biomolecular dynamics is here! 🚨
Today, our group is releasing new work showing how the SOTA accuracy of Allegro can be scaled to massive biomolecular systems up to the full HIV capsid at 44 million atoms!
1/🧵
We scale a large, pretrained Allegro model across a range of systems: from DHFR at 23k atoms, to Factor IX at 91k, Cellulose at 400k, and the all-atom, fully solvated HIV capsid at 44 million, all the way up to >100 million atoms. 2/
Dec 1, 2022 • 14 tweets • 3 min read
A conversation with ChatGPT about DFT, quantum chemistry, and machine learning.
𧡠1/ 2/
Apr 12, 2022 • 18 tweets • 10 min read
🚨 What comes after Neural Message Passing? 🚨
Introducing Allegro - a new approach:
- no message passing or attention
- new SOTA on QM9+MD17
- scales to >100 million atoms
- 1 layer beats all MPNNs+Transformers
- blazing fast
- theory
How? 🧵 #compchem
First and foremost: this was joint work with my co-first author and good friend Albert Musaelian (equal first-author contribution), together with lab members Anders Johansson, Lixin Sun, Cameron Owen, and Mordechai Kornbluth, and of course @BKoz / Boris Kozinsky.
Dec 17, 2021 • 14 tweets • 8 min read
🚨 Equivariance changes the Scaling Laws of Interatomic Potentials 🚨
We have updated the NequIP paper with up to 3x lower errors + (still) SOTA on MD17
Plus: equivariance changes the power-law exponent of the learning curve!
π𧡠#compchem#GNN
Learning curves of error vs training set size typically follow a power law: error = a * N^b, where N is the number of training samples and the exponent b determines how fast a method learns as new data become available.
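The exponent b can be read off a log-log plot: taking logs turns the power law into a straight line, log(error) = log(a) + b·log(N), so a linear fit recovers b. A minimal sketch with made-up learning-curve numbers (the data here is illustrative, not from the paper):

```python
import numpy as np

# Hypothetical learning-curve data: test error at several training-set sizes N.
N = np.array([100, 300, 1_000, 3_000, 10_000])
error = np.array([0.50, 0.27, 0.14, 0.075, 0.040])

# In log space, error = a * N^b becomes a line with slope b.
b, log_a = np.polyfit(np.log(N), np.log(error), 1)
print(f"fitted exponent b = {b:.2f}")  # more negative b = faster learning
```

A steeper (more negative) b means each doubling of the training set buys a larger relative reduction in error, which is the sense in which equivariance "changes the power-law exponent."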
Jan 11, 2021 • 16 tweets • 6 min read
We're excited to introduce NequIP, an equivariant Machine Learning Interatomic Potential that not only obtains SOTA on MD-17, but also outperforms existing potentials with up to 1000x fewer data! w/ @tesssmidt @Materials_Intel @bkoz37 #compchem 🧵 1/N
arxiv.org/pdf/2101.03164…
NequIP (short for Neural Equivariant Interatomic Potentials) extends Graph Neural Network Interatomic Potentials that use invariant convolutions over scalar feature vectors to instead utilize rotation-equivariant convolutions over tensor features (i.e. scalars, vectors, ...). 2/N
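"Rotation-equivariant" means a layer f commutes with rotations: f(Rx) = R f(x). A toy NumPy check of this property (this is just an illustration of the symmetry, not the NequIP architecture): the layer below rescales each vector by a function of its rotation-invariant norm, so its output rotates exactly as its input does.

```python
import numpy as np

def layer(vectors):
    # Rescale each vector by a nonlinear function of its (rotation-invariant)
    # norm; the direction, which rotates with the input, is preserved.
    norms = np.linalg.norm(vectors, axis=-1, keepdims=True)
    return np.tanh(norms) * vectors / (norms + 1e-12)

# A rotation about the z-axis.
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])

x = np.random.default_rng(0).normal(size=(5, 3))

# Equivariance: rotate-then-apply equals apply-then-rotate.
assert np.allclose(layer(x @ R.T), layer(x) @ R.T)
```

An invariant network would keep only the norms and discard the directions; equivariant convolutions keep vector (and higher tensor) features around, which is where the data efficiency comes from.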