Many methods, like GREMLIN, MSA Transformer, RoseTTAFold and AlphaFold, rely on input MSAs generated by non-differentiable methods. (2/8)
We ask the question: what if we make the red arrow differentiable and optimize end-to-end? (3/8)
To accomplish this, we implement a differentiable alignment module (LAM). More specifically, a vectorized/striped Smith-Waterman in #JAX that is extremely fast. (4/8)
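For intuition, here is a minimal sketch of a smooth Smith-Waterman in JAX, with the hard max() relaxed to a temperature-scaled logsumexp so the whole recursion is differentiable. The naive double loop and the placeholder inputs (`sim`, `gap`, `temp`) are only illustrative; the fast version vectorizes the recursion, so treat this as a sketch rather than the released implementation.

```python
import jax
import jax.numpy as jnp
from jax.scipy.special import logsumexp

def soft_sw(sim, gap=1.0, temp=1.0):
    """Smooth local-alignment score: hard max() replaced by temp * logsumexp(. / temp)."""
    la, lb = sim.shape
    h = jnp.zeros((la + 1, lb + 1))
    for i in range(1, la + 1):
        for j in range(1, lb + 1):
            cand = jnp.stack([
                h[i - 1, j - 1] + sim[i - 1, j - 1],  # match / mismatch
                h[i - 1, j] - gap,                    # gap in one sequence
                h[i, j - 1] - gap,                    # gap in the other
                jnp.zeros(()),                        # restart (local alignment)
            ])
            h = h.at[i, j].set(temp * logsumexp(cand / temp))
    return temp * logsumexp(h / temp)                 # smooth max over all cells

# The gradient w.r.t. the similarity matrix behaves like a soft alignment matrix:
sim = jax.random.normal(jax.random.PRNGKey(0), (8, 10))
soft_alignment = jax.grad(soft_sw)(sim)
```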
Given that AlphaFold and LAM are both conveniently implemented in #JAX, as a proof of concept we backprop through AlphaFold and LAM to optimize the confidence metrics (pLDDT and pAE). (5/8)
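Schematically, this proof of concept is just gradient ascent through the composed pipeline. The toy functions below are stand-ins (not AlphaFold or the real LAM); they only illustrate that once both stages are differentiable JAX functions, gradients on a confidence-style objective reach the alignment parameters.

```python
import jax
import jax.numpy as jnp

def toy_alignment(params, seqs):       # stand-in for the differentiable alignment (LAM)
    return jnp.tanh(seqs @ params)     # pretend these are "MSA features"

def toy_confidence(msa_feats):         # stand-in for AlphaFold's pLDDT head
    return jnp.mean(msa_feats ** 2)

def neg_confidence(params, seqs):      # maximize confidence = minimize its negative
    return -toy_confidence(toy_alignment(params, seqs))

seqs = jax.random.normal(jax.random.PRNGKey(0), (4, 16))
params = jax.random.normal(jax.random.PRNGKey(1), (16, 8)) * 0.1
for step in range(100):
    grads = jax.grad(neg_confidence)(params, seqs)   # backprop through both stages
    params = params - 0.1 * grads                    # plain gradient step
```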
Maximizing pLDDT (and potentially "learning" a more optimal MSA) often improves structure prediction accuracy over our initial input MSAs. (6/8)
LAM also allows us to convert GREMLIN into SMURF (Smooth Markov Unaligned Random Field), which simultaneously learns an MSA, coevolution and conservation for a given RNA/protein family. (7/8)
Learning the MSA + coevolution end-to-end matches, and sometimes exceeds, the performance of precomputed MSAs on proteins and RNA for the task of contact prediction. (8/8)
We'll make the code public in a day or two. The owner of our shared GitHub account is currently traveling. 😂
@jakevdp Oops! Thanks to @thesteinegger for pointing out we had actually implemented an "anti-diagonal", not a "striped", vectorization of Smith-Waterman.
First described by Wozniak et al., "Using video-oriented instructions to speed up sequence comparison" (1997).
Weekend project: comparing ESM3 from @EvoscaleAI to ESM2 and inv_cov. The ultimate test of a protein language model is how well the pairwise dependencies it learns correlate with structure. (1/8)
Traditional methods approximate this signal by taking a multiple sequence alignment of a protein family and computing the inverse covariance matrix. For pLMs we extract it by computing a Jacobian over the sequence track (for ESM3, the structure track is masked). (2/8)
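Roughly, the categorical Jacobian records how the logits at every position shift when each position is mutated to each token. The sketch below uses a toy linear "language model" (`toy_logits`) so it runs as-is; only the Jacobian + APC logic reflects the described procedure, and the real pipeline calls the pLM's forward pass instead.

```python
import jax
import jax.numpy as jnp

L, A = 30, 20                                   # toy sequence length / alphabet size
W = jax.random.normal(jax.random.PRNGKey(0), (L, A, L, A)) * 0.01

def toy_logits(x):                              # stand-in for a pLM: one-hot (L, A) -> logits (L, A)
    return jnp.einsum("ia,iajb->jb", x, W)

def categorical_jacobian(logits_fn, x0):
    """J[i, a, j, b] = logits(x0 with position i set to token a)[j, b] - logits(x0)[j, b]."""
    base = logits_fn(x0)
    def mutate(i, a):
        return logits_fn(x0.at[i].set(jax.nn.one_hot(a, A))) - base
    per_pos = lambda i: jax.vmap(lambda a: mutate(i, a))(jnp.arange(A))
    return jax.vmap(per_pos)(jnp.arange(x0.shape[0]))

x0 = jax.nn.one_hot(jax.random.randint(jax.random.PRNGKey(1), (L,), 0, A), A)
jac = categorical_jacobian(toy_logits, x0)                 # (L, A, L, A)
raw = jnp.sqrt(jnp.sum(jac ** 2, axis=(1, 3)))             # Frobenius norm per pair (i, j)
raw = raw * (1.0 - jnp.eye(L))                             # drop the diagonal
apc = jnp.outer(raw.sum(1), raw.sum(0)) / raw.sum()        # average-product correction
contacts = raw - apc                                       # pairwise dependency / contact map
```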
Each dot is a different protein family; I'm reporting contact accuracy for each, comparing invcov(msa) to cat_jac(esm(seq)). ESM3 is doing significantly better at this task! (3/8)
Towards the end of the presentation I went down a bit of a rabbit hole trying to demonstrate that AF3 may still be learning to invert the covariance matrix, which is needed to extract the coevolution signal from the input multiple sequence alignment (MSA). (1/9)
For context, traditional methods like GREMLIN extract coevolution from the input MSA. If you assume the data is non-categorical, you can approximate the coevolution signal via the inverse covariance matrix (2/9). arxiv.org/abs/1906.02598
The inverse can be computed via eigendecomposition, weighting each eigenvector by 1/eigenvalue, so the largest eigenvectors are downweighted the most.
Fun fact: the L2 regularization weight (aka shrinkage) in the previous slide is used as a pseudo-count to avoid dividing by zero: 1/(eigenvalue + l2reg).
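Putting the last two tweets together, here is a minimal sketch of the regularized inverse covariance (assuming a hypothetical one-hot-encoded MSA `msa`; not GREMLIN's actual code):

```python
import jax
import jax.numpy as jnp

def inv_cov(msa_onehot, l2reg=0.1):
    """Inverse covariance via eigendecomposition, with shrinkage as a pseudo-count."""
    x = msa_onehot - msa_onehot.mean(0)           # center the columns
    cov = (x.T @ x) / x.shape[0]                  # empirical covariance, (L*A, L*A)
    evals, evecs = jnp.linalg.eigh(cov)
    # inverse = sum_k v_k v_k^T / (eigenvalue_k + l2reg): the largest eigenvectors are
    # downweighted the most, and l2reg keeps near-zero eigenvalues from blowing up
    return (evecs / (evals + l2reg)) @ evecs.T

msa = jax.nn.one_hot(jax.random.randint(jax.random.PRNGKey(0), (64, 50), 0, 21), 21)
precision = inv_cov(msa.reshape(64, -1))          # block norms per (i, j) give the coevolution map
```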
I tried running our categorical Jacobian method (for extracting the coevolution signal from language models) on Evo from @BrianHie @pdhsu on the 16S rRNA. It appears to pick up on local hairpins 🤓 (1/3).
@BrianHie @pdhsu No strong long-range contacts though... (2/3)
Same with tRNA, seems to be only picking up on local hairpins. (3/3)
A recent preprint from @Lauren_L_Porter shows that it's sometimes possible to sample the alternative conformation of metamorphic proteins by removing the MSA. Though I think this is a very interesting observation, I disagree with the conclusion that coevolution is not used when it is provided. (1/9)
We believe AlphaFold has learned some approximation of an "energy function" and a limited ability to explore. But this is often not enough to find the correct conformation, and an MSA is frequently required to reduce the search space. (2/9)
For small single-domain monomeric proteins (that were in the training set), we see that AlphaFold often fails to predict from a single sequence. Adding extra information (such as conservation [PSSM or MSA, but with coevolution ablated via column shuffling]) helps; a sketch of that ablation is below. (3/9)
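A minimal sketch of the column-shuffling ablation, assuming a hypothetical integer-encoded alignment `msa` of shape (n_seqs, length): permuting each column independently across sequences keeps the per-column conservation (the PSSM) but destroys the inter-column covariation.

```python
import jax
import jax.numpy as jnp

def shuffle_columns(key, msa):
    """Independently permute each MSA column across sequences."""
    keys = jax.random.split(key, msa.shape[1])
    cols = [jax.random.permutation(k, msa[:, j]) for j, k in enumerate(keys)]
    return jnp.stack(cols, axis=1)

msa = jax.random.randint(jax.random.PRNGKey(0), (128, 60), 0, 21)   # toy alignment
shuffled = shuffle_columns(jax.random.PRNGKey(1), msa)
# per-column amino-acid frequencies are unchanged; pairwise covariation (coevolution) is gone
```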