Alex Rives
Sep 2, 2020 · 9 tweets · 3 min read
1/9 Today we’re excited to release Transformer models pre-trained on evolutionary-scale protein sequence data along with a major update to our preprint from last year:

Paper: biorxiv.org/content/10.110…
Models: github.com/facebookresear…
2/9 We added extensive new benchmarks for remote homology, secondary structure, long-range contacts, and mutational effects. Improvements to the downstream models lead to SOTA features across multiple benchmarks.
3/9 There are two larger questions we’re interested in answering: (1) can language models learn biology from sequences? (2) are there favorable scaling laws for data and model parameters, similar to those observed in NLP? In the new work we find support for both.
4/9 Last year, in the first version of the paper, we scaled Transformer models with ~700M parameters to 250M protein sequences in UniParc. The models learn intrinsic properties of proteins.
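For readers who want to try the released models, here is a minimal sketch of extracting per-residue representations from one of the pretrained Transformers. It assumes the public fair-esm package (pip install fair-esm); the model name and interface below follow that package and may differ from the exact release that accompanied the paper.

```python
# Minimal sketch: per-residue embeddings from a pretrained protein Transformer.
# Model name and API are taken from the public fair-esm package (an assumption,
# not necessarily the exact release accompanying the paper).
import torch
import esm

model, alphabet = esm.pretrained.esm1b_t33_650M_UR50S()  # ~650M-parameter model
batch_converter = alphabet.get_batch_converter()
model.eval()

data = [("example_protein", "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ")]
_, _, tokens = batch_converter(data)

with torch.no_grad():
    out = model(tokens, repr_layers=[33])  # final-layer hidden states

# Drop the BOS/EOS positions to get one vector per residue.
seq_len = len(data[0][1])
residue_reps = out["representations"][33][0, 1 : seq_len + 1]
print(residue_reps.shape)  # (seq_len, embedding_dim)
```

Per-residue vectors of this kind are the sort of features the downstream benchmarks below build on.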
5/9 In new experiments we look at differences between datasets (UniRef50 vs UniRef100), model architectures (LSTMs vs Transformers), and parameters (small vs large Transformers).

Transformer architectures (vs. LSTMs), diversity in the data, and scale in parameters all have a big impact.
6/9 Combining features from representation learning with the features used in SOTA structure prediction methods improves performance. For example, secondary structure prediction:
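As an illustration only (not the paper’s exact downstream model), a secondary structure head can be as simple as a small per-residue classifier on top of frozen language-model embeddings; in practice it can also take the profile/HMM features used by existing SOTA pipelines as extra inputs.

```python
# Illustrative sketch of a downstream secondary-structure head on frozen
# language-model embeddings. The 3-class head and its sizes are assumptions,
# not the downstream model from the paper.
import torch
import torch.nn as nn

class SecondaryStructureHead(nn.Module):
    """Per-residue 3-class (helix / strand / coil) classifier."""
    def __init__(self, embed_dim: int, num_classes: int = 3):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(embed_dim, 512),
            nn.ReLU(),
            nn.Linear(512, num_classes),
        )

    def forward(self, residue_embeddings: torch.Tensor) -> torch.Tensor:
        # residue_embeddings: (batch, seq_len, embed_dim)
        return self.proj(residue_embeddings)  # (batch, seq_len, num_classes)

head = SecondaryStructureHead(embed_dim=1280)
logits = head(torch.randn(1, 33, 1280))      # e.g. embeddings from the sketch above
labels = torch.randint(0, 3, (1, 33))        # placeholder per-residue labels
loss = nn.CrossEntropyLoss()(logits.transpose(1, 2), labels)
```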
7/9 Long-range contact prediction:
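A hypothetical baseline for this task, sketched below: score every residue pair from the per-residue embeddings with a small symmetric head. This is illustrative only, not the contact model used in the paper.

```python
# Illustrative sketch: pairwise contact scoring from per-residue embeddings.
import torch
import torch.nn as nn

class PairwiseContactHead(nn.Module):
    def __init__(self, embed_dim: int, hidden: int = 128):
        super().__init__()
        self.reduce = nn.Linear(embed_dim, hidden)
        self.score = nn.Linear(2 * hidden, 1)

    def forward(self, reps: torch.Tensor) -> torch.Tensor:
        # reps: (seq_len, embed_dim) for a single protein
        h = self.reduce(reps)                                   # (L, hidden)
        L = h.shape[0]
        pair = torch.cat(
            [h.unsqueeze(1).expand(L, L, -1),                   # residue i features
             h.unsqueeze(0).expand(L, L, -1)],                  # residue j features
            dim=-1,
        )                                                       # (L, L, 2*hidden)
        logits = self.score(pair).squeeze(-1)                   # (L, L)
        return 0.5 * (logits + logits.T)                        # symmetrize

contact_logits = PairwiseContactHead(embed_dim=1280)(torch.randn(33, 1280))
```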
8/9 Other great work investigating representation learning for protein sequences:

UniRep. biorxiv.org/content/10.110…
SeqVec. biorxiv.org/content/10.110…
TAPE. arxiv.org/abs/1906.08230
9/9 A first answer to the question about scaling laws: the relationship between language modeling fidelity and downstream performance is linear over the course of training! This suggests results will continue to improve with scale.
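To make the claim concrete, here is a minimal sketch of the kind of check involved: fit a line between language-modeling fidelity and a downstream metric measured at a series of training checkpoints. The numbers are placeholders, not results from the paper.

```python
# Placeholder data: one value per training checkpoint (NOT results from the paper).
import numpy as np

lm_fidelity = np.array([0.30, 0.38, 0.45, 0.52, 0.58])        # e.g. held-out LM accuracy
downstream_metric = np.array([0.20, 0.27, 0.34, 0.41, 0.47])  # e.g. contact precision

slope, intercept = np.polyfit(lm_fidelity, downstream_metric, deg=1)
r = np.corrcoef(lm_fidelity, downstream_metric)[0, 1]
print(f"fit: downstream ~= {slope:.2f} * fidelity + {intercept:.2f}, r = {r:.3f}")
```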

More from @alexrives

Dec 23, 2022
In two new papers we have found that the ESM2 language model generalizes beyond natural proteins, and enables programmable generation of complex and modular protein structures.
ESM2 learns the design principles of proteins. With @uwproteindesign we experimentally validated 152 ESM2 designs, including de novo generations outside the space of natural proteins (<20% sequence identity to known proteins).

📄 Read the paper here: biorxiv.org/content/10.110…
[Figure: generated proteins are distinct from natural proteins]
We implemented a high level programming language for generative protein design with ESM2. This made it possible to program the generation of large proteins and complexes with intricate and modular structures.

📄 Read the paper here: biorxiv.org/content/10.110…
[Figure: proteins with programmable symmetry]
Jul 21, 2022
We have trained ESMFold to predict full atomic protein structure directly from the language model representation of a single sequence. Accuracy is competitive with AlphaFold on most proteins, with an order of magnitude faster inference. By the @MetaAI Protein Team.

biorxiv.org/content/10.110…
We train ESM2 language models from 8M up to 15B parameters. Improvements in language modeling perplexity and learning of structure continue through 15B. ESM2 at 150M parameters is better than ESM1b at 650M parameters.
As ESM2 processes a protein sequence, a picture of the protein’s structure materializes in its internal states that enables atomic resolution predictions of the 3D structure, even though the language model was only trained on sequences.
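A minimal sketch of running single-sequence structure prediction, assuming the public fair-esm package with the ESMFold extras installed; the model name and infer_pdb interface follow that package and may differ across versions.

```python
# Minimal sketch: single-sequence structure prediction with ESMFold via the
# public fair-esm package (pip install "fair-esm[esmfold]"); an assumption,
# not necessarily the exact release described in the thread.
import torch
import esm

model = esm.pretrained.esmfold_v1()
model = model.eval().cuda()  # assumes a GPU; the full model is impractical on CPU

sequence = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"
with torch.no_grad():
    pdb_string = model.infer_pdb(sequence)  # atomic coordinates as a PDB record

with open("prediction.pdb", "w") as f:
    f.write(pdb_string)
```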
Dec 4, 2020
Very exciting results this week from AlphaFold in CASP14. An incredible and inspiring achievement by the DeepMind team. Many new possibilities.

The *attention* mechanism is key to the result. Interestingly, we find exactly the same in our work on *unsupervised* learning for proteins.
The idea in protein language modeling: learn biology directly from patterns in sequences from across evolution.

Protein language modeling is unsupervised, i.e. it learns from sequences, not structures. (AlphaFold learns from structures).
That structure can be found in the patterns of sequences is a longstanding idea in biology.

With AI approaches we can scale to millions or billions of diverse sequences and hundreds of millions to billions of parameters.
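As a concrete example of the unsupervised route, the public fair-esm package exposes contact maps derived from the language model’s attention; a minimal sketch (the model name and return_contacts interface are taken from that package and are assumptions here):

```python
# Minimal sketch: contacts derived from a protein language model's attention maps,
# using the public fair-esm package. The language model itself was trained only on
# sequences, with no structural supervision.
import torch
import esm

model, alphabet = esm.pretrained.esm1b_t33_650M_UR50S()
batch_converter = alphabet.get_batch_converter()
model.eval()

_, _, tokens = batch_converter([("example_protein", "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ")])
with torch.no_grad():
    out = model(tokens, return_contacts=True)

contacts = out["contacts"][0]  # pairwise contact probabilities for the sequence
print(contacts.shape)
```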
