Gabriele Corso @ICLR2025
PhD student @MIT • Research on Generative Models and Geometric Deep Learning for Biophysics • BA @CambridgeUni • Former @TwitterResearch, @DEShawGroup and @IBM
Apr 22
Happy to finally release our work on "Composing Unbalanced Flows for Flexible Docking and Relaxation" (FlexDock), which we will be presenting as an oral at #ICLR2025! 🤗✈️🇸🇬 A thread! 🧵

@NoahBGetz @BarzilayRegina @arkrause TLDR: We studied the problem of flexible molecular docking and the issues with existing methods for the task. We came up with a couple of interesting technical ideas that we validated at small scale in this work and that are now making their way into upcoming versions of Boltz! 🔥🚀
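For context on the base technique the thread alludes to, here is a minimal conditional flow-matching training sketch in PyTorch. It is not FlexDock's unbalanced or compositional flows; all shapes, dimensions, and the toy data are illustrative assumptions standing in for ligand/side-chain coordinates conditioned on a protein.

```python
# Minimal (vanilla) flow-matching sketch, for context only.
# FlexDock's unbalanced/composed flows are NOT reproduced here.
import torch
import torch.nn as nn

dim = 3  # e.g. one atom's coordinates; real models operate on full complexes

velocity_net = nn.Sequential(          # v_theta(x_t, t)
    nn.Linear(dim + 1, 128), nn.SiLU(),
    nn.Linear(128, 128), nn.SiLU(),
    nn.Linear(128, dim),
)
opt = torch.optim.Adam(velocity_net.parameters(), lr=1e-3)

def flow_matching_step(x1):
    """One training step: regress the velocity field onto (x1 - x0)."""
    x0 = torch.randn_like(x1)                       # prior / noise sample
    t = torch.rand(x1.shape[0], 1)                  # uniform time in [0, 1]
    xt = (1 - t) * x0 + t * x1                      # linear interpolation path
    target = x1 - x0                                # conditional velocity
    pred = velocity_net(torch.cat([xt, t], dim=-1))
    loss = ((pred - target) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

# Toy data standing in for "true poses".
for step in range(100):
    flow_matching_step(torch.randn(256, dim) + 2.0)
```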
Nov 17, 2024
Thrilled to announce Boltz-1, the first open-source and commercially available model to achieve AlphaFold3-level accuracy on biomolecular structure prediction! An exciting collaboration with @jeremyWohlwend, @pas_saro and an amazing team at MIT and Genesis Therapeutics. A thread!

@jeremyWohlwend @pas_saro We test Boltz-1 on various benchmarks and show it matches the performance of Chai-1. E.g., on CASP15, Boltz-1 demonstrates strong protein-ligand and protein-protein performance, achieving an LDDT-PLI of 65% (40% for Chai-1) and a proportion of DockQ > 0.23 of 83% (76% for Chai-1).
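Since Boltz-1 is released openly, a minimal sketch of running a prediction is below. The YAML schema, the `boltz predict` command, and the `--use_msa_server` flag are assumptions based on the repository's README at release time; the sequence and SMILES are purely illustrative. Check the current docs before relying on any of this.

```python
# Hedged sketch: prepare an input file and invoke the Boltz-1 CLI.
import subprocess
from pathlib import Path

input_yaml = """\
version: 1
sequences:
  - protein:
      id: A
      sequence: MVLSPADKTNVKAAWGKVGAHAGEYGAEALERMFLSFPTTKTYFPHF
  - ligand:
      id: B
      smiles: CC(=O)Oc1ccccc1C(=O)O   # aspirin, purely illustrative
"""

Path("complex.yaml").write_text(input_yaml)

# --use_msa_server asks the tool to fetch MSAs automatically instead of
# requiring precomputed alignment files.
subprocess.run(["boltz", "predict", "complex.yaml", "--use_msa_server"], check=True)
```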
Feb 29, 2024
Excited to finally be able to share our ICLR work critically analyzing how well deep learning docking methods generalize, and how to improve this (spoiler: scaling, augmentation and RL)! With this, we release a new, significantly improved version of DiffDock!

A thread! 🧵

Solving the general blind docking task would have profound biomedical implications: it would help us understand the mechanism of action of new drugs, predict adverse side-effects before clinical trials… But all of this requires methods that generalize beyond a few well-studied proteins.
Oct 5, 2022
Excited to share DiffDock, a new non-Euclidean diffusion model for molecular docking! On PDBBind, the standard benchmark, DiffDock outperforms the previous state-of-the-art methods, which were based on expensive search, by a huge margin (38% vs 23%)!
arxiv.org/abs/2210.01776
A thread! 👇

@HannesStaerk @BarzilayRegina Recent regression-based ML methods for docking showed strong speed-ups but no significant accuracy improvements over traditional search-based approaches. We identify the problem in their objective functions and show how generative modeling aligns well with the docking task.
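To make the objective-function argument concrete, here is a toy 1-D illustration (not DiffDock itself): when several poses are equally plausible, the MSE-optimal regression prediction sits between them, while sampling from a learned distribution lands on real modes. The "pose coordinate" and its two modes are made up for the example.

```python
# Toy illustration of mode-averaging under a regression (MSE) objective
# versus a generative (sampling) objective. Not DiffDock itself.
import numpy as np

rng = np.random.default_rng(0)

# Two equally likely ground-truth poses for the same protein-ligand pair,
# e.g. a symmetric pocket: pose A near -1.0, pose B near +1.0.
poses = rng.choice([-1.0, 1.0], size=10_000) + 0.05 * rng.normal(size=10_000)

# Regression with an MSE objective is minimized by the conditional mean...
mse_prediction = poses.mean()          # ~0.0, sits between the two pockets
# ...which is far from *both* plausible poses:
print("MSE-optimal point prediction:", round(mse_prediction, 3))
print("distance to nearest real pose:",
      round(min(abs(mse_prediction - m) for m in (-1.0, 1.0)), 3))

# A generative model instead learns the full distribution and samples from it;
# here we cheat and sample the empirical distribution directly.
samples = rng.choice(poses, size=5)
print("generative samples (each lands in a real pocket):", np.round(samples, 2))
```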