Alex Rives
Jul 21, 2022
We have trained ESMFold to predict full atomic protein structure directly from language model representations of a single sequence. Accuracy is competitive with AlphaFold on most proteins, with an order of magnitude faster inference. By the @MetaAI Protein Team.
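For concreteness, here is a minimal sketch of what single-sequence folding looks like, assuming the ESMFold interface from the facebookresearch/esm package (the sequence below is a placeholder):

```python
# Minimal sketch of single-sequence folding, assuming the ESMFold interface
# from the facebookresearch/esm package; the sequence is a placeholder.
import torch
import esm

model = esm.pretrained.esmfold_v1()  # downloads weights on first use
model = model.eval().cuda()

sequence = "MKTVRQERLKSIVRILERSKEPVSGAQLAEELSVSRQVIVQDIAYLRSLGYNIVATPRGYVLAGG"

with torch.no_grad():
    pdb_str = model.infer_pdb(sequence)  # full atomic structure as a PDB string

with open("prediction.pdb", "w") as f:
    f.write(pdb_str)
```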

biorxiv.org/content/10.110…
We train ESM2 language models from 8M up to 15B parameters. Improvements in language modeling perplexity and learning of structure continue through 15B. ESM2 at 150M parameters is better than ESM1b at 650M parameters.
As ESM2 processes a protein sequence, a picture of the protein’s structure materializes in its internal states that enables atomic resolution predictions of the 3D structure, even though the language model was only trained on sequences.
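As an illustration of probing those internal states, a minimal sketch using the ESM2 checkpoints in the facebookresearch/esm package (the checkpoint, layer index, and sequence here are just examples):

```python
# Sketch: extract per-residue representations and predicted inter-residue
# contacts from ESM2 (650M-parameter checkpoint used as an example).
import torch
import esm

model, alphabet = esm.pretrained.esm2_t33_650M_UR50D()
batch_converter = alphabet.get_batch_converter()
model.eval()

data = [("example", "MKTVRQERLKSIVRILERSKEPVSG")]  # placeholder sequence
_, _, tokens = batch_converter(data)

with torch.no_grad():
    out = model(tokens, repr_layers=[33], return_contacts=True)

residue_repr = out["representations"][33]  # (batch, tokens, hidden) final-layer states
contacts = out["contacts"][0]              # residue-residue contact probabilities
```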
There are billions of protein sequences with unknown structure and function, many from metagenomic sequencing. ESMFold makes it feasible to map this structural space in practical timescales. We were able to fold a random sample of 1M metagenomic sequences in a few hours.
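A sketch of what that bulk folding loop can look like, again assuming the esm package's ESMFold interface; the sequence IDs, sequences, and file layout are hypothetical:

```python
# Hypothetical bulk-folding loop over metagenomic sequences with ESMFold.
import torch
import esm

model = esm.pretrained.esmfold_v1().eval().cuda()
model.set_chunk_size(128)  # lower peak memory on long sequences, at some cost in speed

metagenomic_seqs = {  # placeholder IDs and sequences
    "MGYP_example_1": "MKTVRQERLKSIVRILERSKEPVSG",
    "MGYP_example_2": "GHIKLMNPQRSTVWYACDEFGHIKL",
}

for seq_id, seq in metagenomic_seqs.items():
    with torch.no_grad():
        pdb_str = model.infer_pdb(seq)
    with open(f"{seq_id}.pdb", "w") as f:
        f.write(pdb_str)  # per-residue confidence (pLDDT) is stored in the B-factor column
```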
A large fraction have high confidence and are different from any known experimental structure. Many have sequences without matches in annotated sequence databases. We think ESMFold can help to understand regions of protein space that are distant from existing knowledge.
Work by the Protein Team at Meta AI FAIR: @ebetica, @halilakin, @proteinrosh, @BrianHie, @ZhongkaiZhu, Wenting Lu, @AllanSanCosta, Maryam Fazel-Zarandi, @TomSercu, @salcandido.

Building on great open-source projects: @open_fold, @fairseq, fairscale, foldseek, and many others.

More from @alexrives

Dec 23, 2022
In two new papers we have found that the ESM2 language model generalizes beyond natural proteins, and enables programmable generation of complex and modular protein structures.
ESM2 learns the design principles of proteins. With @uwproteindesign we experimentally validated 152 ESM2 designs, including de novo generations outside the space of natural proteins (<20% sequence identity to known proteins).

📄 Read the paper here: biorxiv.org/content/10.110…
[Figure: generated proteins are distinct from natural proteins]
We implemented a high level programming language for generative protein design with ESM2. This made it possible to program the generation of large proteins and complexes with intricate and modular structures.

📄 Read the paper here: biorxiv.org/content/10.110…
[Figure: proteins with programmable symmetry]
Dec 4, 2020
Very exciting results this week from AlphaFold in CASP14. An incredible and inspiring achievement by the DeepMind team. Many new possibilities.

The *attention* mechanism is key to the result. Interestingly, we find exactly the same in our work on *unsupervised* learning for proteins.
The idea in protein language modeling: learn biology directly from patterns in sequences across evolution.

Protein language modeling is unsupervised, i.e. it learns from sequences, not structures. (AlphaFold learns from structures).
That structure can be found in the patterns of sequences is a longstanding idea in biology.

With AI approaches we can scale to millions or even billions of diverse sequences, using models with hundreds of millions to billions of parameters.
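The training signal behind this is the masked-language-modeling objective: hide a fraction of residues and train the model to recover them from the rest of the sequence. A generic sketch of that loss (not the actual ESM training code; the `model` interface here is hypothetical):

```python
# Generic masked-language-modeling loss for protein sequences.
import torch
import torch.nn.functional as F

def masked_lm_loss(model, tokens, mask_idx, pad_idx, mask_prob=0.15):
    """tokens: (batch, length) integer-encoded amino-acid sequences.
    `model` is any network mapping tokens to per-position logits over the vocabulary."""
    targets = tokens.clone()
    maskable = tokens != pad_idx                                      # never mask padding
    mask = (torch.rand_like(tokens, dtype=torch.float) < mask_prob) & maskable
    corrupted = tokens.masked_fill(mask, mask_idx)                    # replace chosen residues with <mask>

    logits = model(corrupted)                                         # (batch, length, vocab)
    return F.cross_entropy(logits[mask], targets[mask])               # loss only at masked positions
```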
Sep 2, 2020
1/9 Today we’re excited to release Transformer models pre-trained on evolutionary-scale protein sequence data along with a major update to our preprint from last year:

Paper: biorxiv.org/content/10.110…
Models: github.com/facebookresear…
2/9 We added extensive new benchmarks for remote homology, secondary structure, long-range contacts, and mutational effect prediction. Improvements to downstream models yield state-of-the-art features across multiple benchmarks.
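To make "features for downstream models" concrete, one common recipe is to mean-pool the per-residue representations into a single vector per protein and fit a simple classifier on top. A sketch, assuming the pretrained checkpoints in the facebookresearch/esm package; the sequences and labels are placeholders:

```python
# Sketch: pretrained representations as fixed features for a downstream classifier.
import torch
import esm
from sklearn.linear_model import LogisticRegression

model, alphabet = esm.pretrained.esm1b_t33_650M_UR50D()
batch_converter = alphabet.get_batch_converter()
model.eval()

data = [("seq_a", "MKTVRQERLKSIVRILERSKEPVSG"),   # placeholder sequences
        ("seq_b", "AHKLMNPQRSTVWYACDEFGHIKLM")]
labels = [0, 1]                                   # hypothetical binary property

_, _, tokens = batch_converter(data)
with torch.no_grad():
    reprs = model(tokens, repr_layers=[33])["representations"][33]

# Mean-pool over residues (skipping the BOS token) to get one embedding per protein.
features = [reprs[i, 1:len(seq) + 1].mean(dim=0).numpy() for i, (_, seq) in enumerate(data)]

clf = LogisticRegression().fit(features, labels)
```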
3/9 There are two larger questions we’re interested in answering: (1) can language models learn biology from sequences? (2) are there favorable scaling laws for data and model parameters, similar to those observed in NLP? In new work we find support for both.
