We’re excited to introduce @ChaiDiscovery and release Chai-1, a foundation model for molecular structure prediction that performs at the state of the art across a variety of drug discovery tasks.
We're releasing inference code, weights & a web interface: chaidiscovery.com/blog/introduct…
We tested Chai-1 across a number of benchmarks, and found that the model achieves a 77% success rate on the PoseBusters benchmark (vs. 76% by AlphaFold3).
Jan 10, 2023 • 10 tweets
📢 Our new #manuscript shows how zero-shot generative AI can create de novo antibodies.
🏅Hundreds of antibodies are created zero-shot and validated in the wet lab for the first time.
Note: all proteins binding the target (or its homologs) were removed from the training set.
Sep 2, 2020 • 8 tweets
Excited to share an update to our work on evolutionary-scale modeling (ESM)! Over the past year, we rewrote our paper with better pretraining and downstream models, leading to state-of-the-art results across multiple benchmarks. (1/8) biorxiv.org/content/10.110…
Last year, we showed that Transformer language models learn intrinsic properties of proteins from sequences. But on quantitative benchmarks, these models did not improve over alignment-based methods, as shown by @roshan_m_rao et al. in TAPE. 😵 (2/8)