For my latest attempt at introducing proteins to students, I made a Google Colab notebook that predicts protein structure from a single sequence. I asked the students to tweak the sequence to get a helix, two helices, or... (1/5) colab.research.google.com/github/sokrypt…
I gave them the following cheat sheet: 😅 (2/5)
To make this practical, I had to make various tweaks to AlphaFold so that it compiles as fast as possible (~10 seconds), runs as fast as possible (<1 second), and avoids recompiling when the sequence length or the number of recycles changes. (3/5)
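(A minimal sketch of one way to avoid length-triggered recompiles, not the notebook's actual code: jit recompiles whenever input shapes change, so pad the sequence to a fixed bucket and mask the padding. `PAD_TO` and `run_model` are hypothetical names.)

```python
import jax
import jax.numpy as jnp

PAD_TO = 128  # hypothetical fixed bucket length

@jax.jit
def run_model(tokens, mask):
    # stand-in for the compiled AlphaFold forward pass
    return (tokens * mask).sum()

def predict(seq_tokens):
    L = seq_tokens.shape[0]
    tokens = jnp.pad(seq_tokens, (0, PAD_TO - L))
    mask = jnp.pad(jnp.ones(L), (0, PAD_TO - L))
    return run_model(tokens, mask)  # same shapes every call -> no recompile
```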
I added @david_koes' py3Dmol to display the structure with sidechains colored hydrophobic/hydrophilic (to encourage students to think about hydrophobicity). Finally, I added the ability to animate the protein through recycles at the end. 🙃 (4/5)
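(Rough sketch of the idea in py3Dmol; the residue grouping and colors are my arbitrary choices, not the notebook's actual palette, and `pdb_str` is assumed to hold the predicted structure.)

```python
import py3Dmol

hydrophobic = ["ALA", "VAL", "LEU", "ILE", "MET", "PHE", "TRP", "PRO", "GLY"]

view = py3Dmol.view(width=500, height=400)
view.addModel(pdb_str, "pdb")  # pdb_str: predicted structure (assumed)
view.setStyle({"cartoon": {"color": "white"}})
# hydrophobic sidechains in one color...
view.addStyle({"resn": hydrophobic}, {"stick": {"color": "yellow"}})
# ...everything else (hydrophilic) in another, via an inverted selection
view.addStyle({"resn": hydrophobic, "invert": True}, {"stick": {"color": "blue"}})
view.zoomTo()
view.show()
```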
Any feedback is welcome! Share your "designed" proteins 😁 and remember: (5/5)
For those who are curious about how I reduced the compile time: I noticed AlphaFold was using a Python for-loop for its fold_iteration (within /folding.py). I replaced this with a jax scan fn, reducing compile time from a few minutes to ~20 secs. (1/2)
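(Toy illustration of why this helps, not folding.py itself: a Python loop unrolls N copies of the body into the XLA graph, while lax.scan traces the body once.)

```python
import jax
import jax.numpy as jnp

w = jnp.eye(64)

def fold_iteration(act, _):
    act = jnp.tanh(act @ w)  # stand-in for one structure-module iteration
    return act, None

# before: for _ in range(8): act, _ = fold_iteration(act, None)  -> 8 copies traced
# after: one traced copy, iterated 8 times
act, _ = jax.lax.scan(fold_iteration, jnp.ones((10, 64)), None, length=8)
```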
Finally, I only compile a single pass of the model and move the recycling mechanism outside of the compiled code. This lets me run as many ♻️ as I want externally. I just needed to add an option (to /modules.py) to input/output the previous pair/pos via the batch. (2/2)
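(Hedged sketch of the external loop; `model_apply`, `init_prev`, and the output keys are hypothetical names, not the actual modules.py interface.)

```python
import jax

@jax.jit
def single_pass(batch, prev):
    # one compiled AlphaFold pass; returns outputs plus the recycled
    # pair representation / positions to feed into the next call
    outputs = model_apply(params, batch, prev)  # hypothetical
    return outputs, (outputs["pair"], outputs["positions"])

prev = init_prev(batch)  # zeros on the first ♻️ (hypothetical helper)
trajectory = []
for _ in range(num_recycles):
    outputs, prev = single_pass(batch, prev)
    trajectory.append(outputs)  # save every ♻️ for animations / resuming
```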
The advantage of moving recycles outside of the compiled code is that it allows you to save the outputs at each ♻️... (allowing for cool animations), and if you are not happy with the result, you can technically resume ♻️ling... without rerunning prev. iterations 🤓 (3/2)
(For JAX people looking at this code: AlphaFold uses some custom "safe" key splitting, and the scan fn is unable to iterate over the split keys. To get around this issue, I added key splitting within the function. Maybe there is a better solution?)
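(What I mean, in toy form with plain JAX keys: carry the key through the scan and split inside the body, instead of pre-splitting a stack of keys to iterate over. Plain PRNG keys do scan fine; the snag was AlphaFold's custom wrapper.)

```python
import jax
import jax.numpy as jnp

def body(carry, _):
    key, act = carry
    key, subkey = jax.random.split(key)  # split *within* the scanned fn
    act = act + 0.1 * jax.random.normal(subkey, act.shape)
    return (key, act), None

init = (jax.random.PRNGKey(0), jnp.zeros(64))
(_, act), _ = jax.lax.scan(body, init, None, length=8)
```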
Weekend project: comparing ESM3 from @EvoscaleAI to ESM2 and inv_cov. The ultimate test of a protein language model is how well the pairwise dependencies it learns correlate with structure. (1/8)
Traditional methods approximate this signal by taking a multiple sequence alignment of a protein family and computing the inverse covariance matrix. For pLMs, we extract it by computing a Jacobian over the sequence track (for ESM3, the structure track is masked). (2/8)
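(Simplified sketch of a categorical Jacobian, with my own function names; the real method has more bells and whistles, e.g. symmetrization and APC correction, omitted here.)

```python
import numpy as np

def categorical_jacobian(logits_fn, tokens, A=20):
    # logits_fn: token array (L,) -> logits (L, A); tokens: int array (L,)
    L = tokens.shape[0]
    f0 = logits_fn(tokens)
    J = np.zeros((L, A, L, A))
    for i in range(L):
        for a in range(A):
            mut = tokens.copy()
            mut[i] = a
            J[i, a] = logits_fn(mut) - f0  # logit response to mutating i -> a
    # reduce (L, A, L, A) -> (L, L) contact-like map: norm over token dims
    return np.sqrt((J ** 2).sum((1, 3)))
```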
Each dot is a different protein family; I'm reporting contact accuracy for each, comparing invcov(msa) to cat_jac(esm(seq)). ESM3 is doing significantly better at this task! (3/8)
Towards the end of the presentation, I went down a bit of a rabbit hole trying to demonstrate that AF3 may still be learning to invert the covariance matrix, which is needed to extract the coevolution signal from the input multiple sequence alignment (MSA). (1/9)
For context, traditional methods like GREMLIN extract coevolution from the input MSA. If you make the assumption that the data is non-categorical (effectively treating it as Gaussian), you can approximate the coevolution signal via the inverse covariance matrix. (2/9) arxiv.org/abs/1906.02598
The inverse can be computed via eigendecomposition: keep the eigenvectors, but reweight each by 1/eigenvalue, so the largest modes are downweighted the most.
Fun fact: the L2 regularization weight (aka shrinkage) in the previous slide is used as a pseudo-count to avoid dividing by zero: 1/(eigenvalue + l2reg)
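(The last two tweets in numpy form; for a symmetric C this is exactly (C + l2reg·I)⁻¹.)

```python
import numpy as np

def shrunk_inverse(C, l2reg=1e-2):
    w, V = np.linalg.eigh(C)                     # C = V diag(w) V^T
    return V @ np.diag(1.0 / (w + l2reg)) @ V.T  # = (C + l2reg*I)^-1
```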
I tried running our categorical Jacobian method (for extracting the coevolution signal from language models) on Evo from @BrianHie @pdhsu on the 16S rRNA. It appears to pick up on local hairpins 🤓 (1/3).
No strong long-range contacts though... (2/3)
Same with tRNA; it seems to only be picking up on local hairpins. (3/3)
A recent preprint from @Lauren_L_Porter shows that it's sometimes possible to sample the alternative conformation of metamorphic proteins by removing the MSA. Though I think this is a very interesting observation, I disagree that coevolution is not used when it is provided. (1/9)
We believe AlphaFold has learned some approximation of an "energy function" and a limited ability to explore. But this is often not enough to find the correct conformation, and often an MSA is required to reduce the search space. (2/9)
For small single-domain monomeric proteins (that were in the training set), we see that AlphaFold often fails to predict from a single sequence. Adding extra information (such as conservation [PSSM or MSA, with coevolution ablated via column shuffling]) helps. (3/9)
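(A minimal sketch of what I mean by the column-shuffling ablation: permuting sequences independently within each column keeps per-column conservation, i.e. the PSSM, but destroys pairwise covariation between columns. Function name is mine.)

```python
import numpy as np

def shuffle_msa_columns(msa, seed=0):
    # msa: (num_seqs, length) integer-encoded alignment
    rng = np.random.default_rng(seed)
    msa = msa.copy()
    for j in range(msa.shape[1]):
        msa[:, j] = rng.permutation(msa[:, j])  # permute within column j
    return msa
```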