Chaitanya K. Joshi
PhD student @Cambridge_CL. Research Intern @AIatMeta @OpenCatalyst. Interested in Deep Learning for biomolecule design. Organising @LoGConference.
Dec 13, 2023
We just released a "Hitchhiker's Guide" for getting started with deep neural nets for 3D structural biology & chemistry -- we hope it will be useful for newcomers! 🤗

Learn about the core architectures powering protein design, materials discovery, molecular simulations, and more! I learnt so much from my passionate and kind co-authors on this long journey of writing something together, without targeting any venue or aiming to publish 💙

We focussed purely on explaining everything simply and clearly -- let us know how to improve it!
Jan 24, 2023
🚀Excited+nervous to share our latest work on understanding geometric GNNs for biomolecules, materials, etc.

"On the Expressive Power of Geometric GNNs" with @crisbodnar @SimMat20 @TacoCohen @pl219_Cambridge

PDF: arxiv.org/abs/2301.09308
Code: github.com/chaitjo/geomet…
Findings👇

[Image: Axes of geometric GNN expressivity]

How powerful are geometric GNNs? How do design choices influence expressivity?

💡Key idea: Geometric graph isomorphism + Geometric WL framework --> upper bound on geom. GNN expressivity.

Standard GNN tools (WL) are inapplicable due to new physical symmetries (roto-translation).
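To make the symmetry point concrete, here is a minimal sketch of my own (hypothetical names, not the paper's code): a "scalarisation"-style layer that only consumes pairwise distances is automatically unchanged when the coordinates are rotated and translated -- exactly the kind of invariance that ordinary WL-style tools never have to account for.

```python
# Hypothetical sketch: a one-layer invariant ("scalarisation") geometric
# message-passing step, checked for roto-translation invariance.
import numpy as np

def invariant_layer(pos, adj, feats):
    """Aggregate neighbour features weighted by pairwise distances.

    pos:   (n, 3) node coordinates
    adj:   (n, n) 0/1 adjacency matrix
    feats: (n, d) scalar node features
    """
    diff = pos[:, None, :] - pos[None, :, :]   # (n, n, 3) relative vectors
    dist = np.linalg.norm(diff, axis=-1)       # (n, n) distances: invariant scalars
    weights = adj * np.exp(-dist)              # distance-based edge weights
    return feats + weights @ feats             # simple invariant update

rng = np.random.default_rng(0)
n, d = 5, 4
pos = rng.normal(size=(n, 3))
adj = (rng.random((n, n)) < 0.5).astype(float)
adj = np.triu(adj, 1); adj = adj + adj.T       # symmetric, no self-loops
feats = rng.normal(size=(n, d))

# Apply a random rotation + translation to the coordinates.
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
if np.linalg.det(Q) < 0:
    Q[:, 0] *= -1                              # make it a proper rotation
t = rng.normal(size=3)
pos_rt = pos @ Q.T + t

out_a = invariant_layer(pos, adj, feats)
out_b = invariant_layer(pos_rt, adj, feats)
print(np.allclose(out_a, out_b))               # True: output unchanged under roto-translation
```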
Mar 15, 2022
🚨 New blogpost alert v2:

"Recent Advances in Efficient and Scalable Graph Neural Networks"

Read on for an overview of the toolbox enabling Graph Neural Networks to scale to real-world graphs and real-time applications! 👇

chaitjo.com/post/efficient…

Training and deploying GNNs to handle real-world graph data poses several theoretical and engineering challenges:
1. Giant Graphs – Memory Limitations
2. Sparse Computations – Hardware Limitations
3. Graph Subsampling – Reliability Limitations
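As a concrete illustration of point 3, here is a minimal sketch of my own (hypothetical helper, not from the blog post) of GraphSAGE-style neighbour subsampling: sampling a fixed fanout per hop lets a mini-batch fit in memory on giant graphs, at the cost of the reliability issues the post discusses.

```python
# Hypothetical sketch of neighbour subsampling for mini-batch GNN training.
import random

def sample_neighbourhood(adj_list, seed_nodes, fanouts):
    """Sample a fixed number of neighbours per hop for a mini-batch of seeds.

    adj_list:   dict node -> list of neighbour ids
    seed_nodes: nodes in the current mini-batch
    fanouts:    neighbours to keep at each hop, e.g. [10, 5]
    Returns the set of nodes whose features must be loaded for this batch.
    """
    frontier, needed = set(seed_nodes), set(seed_nodes)
    for fanout in fanouts:
        next_frontier = set()
        for node in frontier:
            neighbours = adj_list.get(node, [])
            sampled = random.sample(neighbours, min(fanout, len(neighbours)))
            next_frontier.update(sampled)
        needed |= next_frontier
        frontier = next_frontier
    return needed

# Toy graph: a path 0-1-2-3-4 plus a hub node 5 connected to everything.
adj_list = {0: [1, 5], 1: [0, 2, 5], 2: [1, 3, 5], 3: [2, 4, 5], 4: [3, 5],
            5: [0, 1, 2, 3, 4]}
print(sample_neighbourhood(adj_list, seed_nodes=[0], fanouts=[2, 2]))
```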
Oct 3, 2021
Are you applying for a PhD in Machine Learning, Artificial Intelligence, and beyond?

Here's a thread of high-quality resources that helped me understand the process + craft my application better. 👇

Tim Dettmers' guide to ML PhD applications:

timdettmers.com/2018/11/26/phd…
Sep 14, 2020
Sparse Transformers / Graph Neural Networks as a general architecture across a variety of domains: a thread on success stories 👇

Firstly (and this is the most well known), large-scale Transformers have seemingly replaced RNNs in commercial NLP pipelines as they scale better: jalammar.github.io/illustrated-be…
Feb 28, 2020
Excited to share a blog post on the connection between #Transformers for NLP and #GraphNeuralNetworks (GNNs or GCNs).

graphdeeplearning.github.io/post/transform…

The key idea: sentences are fully-connected graphs of words, and Transformers are very similar to Graph Attention Networks (GATs), which use multi-head attention to aggregate features from neighbouring nodes (i.e., words).
cc. @PetarV_93
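For readers who prefer code to prose, here is a minimal sketch of my own (assuming PyTorch; not taken from the blog post) of single-head self-attention written as attention-weighted message passing over the fully-connected sentence graph.

```python
# Hypothetical sketch: self-attention as message passing on a complete
# "sentence graph" whose nodes are word embeddings.
import torch
import torch.nn.functional as F

def attention_message_passing(h, W_q, W_k, W_v):
    """h: (n_words, d) node features; W_q/W_k/W_v: (d, d) projection matrices."""
    q, k, v = h @ W_q, h @ W_k, h @ W_v
    # Attention scores between every pair of words = edges of the complete graph.
    scores = q @ k.T / (h.shape[-1] ** 0.5)
    alpha = F.softmax(scores, dim=-1)   # per-node normalised edge weights
    return alpha @ v                    # aggregate neighbour messages

torch.manual_seed(0)
n_words, d = 6, 8                       # a 6-word "sentence"
h = torch.randn(n_words, d)
W_q, W_k, W_v = (torch.randn(d, d) for _ in range(3))
print(attention_message_passing(h, W_q, W_k, W_v).shape)  # torch.Size([6, 8])
```

Stacking such layers with multiple heads and feed-forward blocks recovers the Transformer encoder; restricting the attention to a sparse neighbourhood instead of the full graph recovers a GAT-style layer.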