🚀 Excited + nervous to share our latest work on understanding geometric GNNs for biomolecules, materials, etc.

"On the Expressive Power of Geometric GNNs" with @crisbodnar @SimMat20 @TacoCohen @pl219_Cambridge

PDF: arxiv.org/abs/2301.09308
Code: github.com/chaitjo/geomet…
Findings 👇

[Figure: axes of geometric GNN expressivity, (1) scalarisation body order, …]
How powerful are geometric GNNs? How do design choices influence expressivity?

💡 Key idea: geometric graph isomorphism + a Geometric Weisfeiler-Leman (GWL) framework → an upper bound on geometric GNN expressivity.

Standard GNN tools (the WL test) are inapplicable due to the new physical symmetries (roto-translations).
- GWL formalises the role of depth, invariance vs. equivariance, and body ordering.

- Invariant GNNs cannot tell apart one-hop identical geometric graphs and fail to compute global properties.

- Equivariant GNNs distinguish more graphs. How? Depth propagates local geometry beyond one hop (toy sketch below).
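
To make that concrete, here's a minimal NumPy sketch in the spirit of the paper's counterexamples (the coordinates and helper functions are my own toy construction, not code from our repo): two 4-node chains whose 1-hop environments look identical to a distance-based invariant layer, but which an equivariant vector aggregation followed by an invariant readout tells apart.

```python
import numpy as np

# Two toy 4-node chains in 2-D: the end bonds bend the same way in A
# and opposite ways in B. Every 1-hop neighbourhood has the same
# distance multiset, so the two graphs differ only globally.
pos_A = np.array([[0., 1.], [0., 0.], [1., 0.], [1., 1.]])
pos_B = np.array([[0., 1.], [0., 0.], [1., 0.], [1., -1.]])
edges = [(0, 1), (1, 2), (2, 3)]
nbrs = {i: [b for a, b in edges if a == i] + [a for a, b in edges if b == i]
        for i in range(4)}

def distance_multisets(pos):
    """Per-node neighbour-distance multisets: all a distance-based
    invariant layer can see."""
    return sorted(tuple(sorted(round(float(np.linalg.norm(pos[j] - pos[i])), 6)
                               for j in nbrs[i])) for i in range(4))

def equivariant_readout(pos):
    """Sum relative position vectors per node (equivariant), then take
    rotation- and translation-invariant dot products along edges."""
    v = [sum(pos[j] - pos[i] for j in nbrs[i]) for i in range(4)]
    return sorted(round(float(np.dot(v[a], v[b])), 6) for a, b in edges)

print(distance_multisets(pos_A) == distance_multisets(pos_B))    # True:  invariant view identical
print(equivariant_readout(pos_A) == equivariant_readout(pos_B))  # False: equivariant view differs
```

The dot-product readout is itself roto-translation invariant, yet it separates the two chains because the vector messages carry orientation information across hops.
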
What about practical implications? Synthetic experiments highlight challenges in building maximally powerful geometric GNNs:

- Oversquashing of geometric information with increasing depth.

- The utility of higher-order spherical tensors over Cartesian vectors (see the sketch below).
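
As a toy illustration of that last point (again an illustrative example of mine, not code from the paper): square-planar and tetrahedral neighbourhoods have identical distance multisets and zero Cartesian vector sums, so scalars and order-1 vectors collapse them, while order-2 tensors pull them apart.

```python
import numpy as np

# Square-planar vs. tetrahedral neighbourhoods, 4 unit-distance neighbours each.
square = np.array([[1, 0, 0], [-1, 0, 0], [0, 1, 0], [0, -1, 0]], dtype=float)
tetra = np.array([[1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]], dtype=float) / np.sqrt(3)

for name, env in [("square", square), ("tetra", tetra)]:
    dists = np.linalg.norm(env, axis=1)       # order 0: all ones -> identical
    vec_sum = env.sum(axis=0)                 # order 1 (Cartesian sum): zero for both
    quad = sum(np.outer(r, r) for r in env)   # order 2: sum of outer products
    print(name, np.round(dists, 3), np.round(vec_sum, 3),
          np.round(np.linalg.eigvalsh(quad), 3))
# square -> eigenvalues [0. 2. 2.], tetra -> [1.333 1.333 1.333]:
# only the order-2 invariants separate the two environments.
```
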

And there's lots more in the full paper! E.g., deep theoretical connections between our discrimination-based perspective and universal approximation.

Looking forward to feedback and discussions at upcoming talks or virtually! 🤗

Thank you very much to the anonymous reviewers from @LogConference, @neur_reps, and @iclr_conf (rejected 😢), as well as supportive colleagues @IlyesBatatia @davkovacs10 @weihua916 @challenger1987 @DutaIulia and several others, for helping improve our work!
P.S. Are you new to geometric graphs and GNNs?

Try this introductory Colab before diving into the more advanced stuff:
colab.research.google.com/drive/1p9vlVAU…
with @charlieharris01 and Ramon Viñas, developed for students at @Cambridge_CL and @AIMS_Next.

More from @chaitjo

Mar 15, 2022
🚨 New blogpost alert v2:

"Recent Advances in Efficient and Scalable Graph Neural Networks"

Read on for an overview of the toolbox enabling Graph Neural Networks to scale to real-world graphs and real-time applications! 👇

chaitjo.com/post/efficient…
Training and deploying GNNs to handle real-world graph data poses several theoretical and engineering challenges:
1. Giant Graphs – Memory Limitations
2. Sparse Computations – Hardware Limitations
3. Graph Subsampling – Reliability Limitations
The blogpost introduces three simple but effective ideas in the 🛠 'toolbox' 🛠 for developing efficient and scalable GNNs:
1. Graph data preparation and sampling (a quick sketch follows below)
2. Efficient GNN architecture design
3. Learning Paradigms to improve performance and latency
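
For a flavour of idea 1, here's a minimal neighbour-sampling sketch using PyTorch Geometric's NeighborLoader (my example rather than code from the blogpost; assumes PyG ≥ 2.0 with the Planetoid/Cora dataset available):

```python
from torch_geometric.datasets import Planetoid
from torch_geometric.loader import NeighborLoader

data = Planetoid(root="data/", name="Cora")[0]

# Sample at most 10 neighbours per node at hop 1 and 5 at hop 2, so each
# mini-batch is a small subgraph that fits in GPU memory.
loader = NeighborLoader(
    data,
    num_neighbors=[10, 5],
    batch_size=128,
    input_nodes=data.train_mask,
)

for batch in loader:
    # The first `batch.batch_size` nodes are the seed nodes;
    # compute the training loss on those only.
    print(batch.num_nodes, batch.edge_index.shape)
    break
```
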
Oct 3, 2021
Are you applying for a PhD in Machine Learning, Artificial Intelligence, and beyond?

Here's a thread of high-quality resources that helped me understand the process + craft my application better. 👇
Tim Dettmers' guide to ML PhD applications:

timdettmers.com/2018/11/26/phd…
Nelson Liu's blogposts on NLP PhD applications + statement of purpose (see his website):

blog.nelsonliu.me/2019/10/24/stu…
Feb 28, 2020
Excited to share a blog post on the connection between #Transformers for NLP and #GraphNeuralNetworks (GNNs or GCNs).

graphdeeplearning.github.io/post/transform…
The key idea: sentences are fully-connected graphs of words, and Transformers are very similar to Graph Attention Networks (GATs), which use multi-head attention to aggregate features from neighbouring nodes (i.e., the other words).
cc @PetarV_93
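
A minimal sketch of the correspondence (illustrative code of mine, not from the blogpost): single-head self-attention is just attention-weighted message passing over the complete graph of words.

```python
import torch

def self_attention(X, Wq, Wk, Wv):
    """Single-head self-attention as message passing on the complete
    graph over words: every word attends to every other word."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / K.shape[-1] ** 0.5    # pairwise edge weights
    alpha = torch.softmax(scores, dim=-1)    # normalise over all "neighbours"
    return alpha @ V                         # aggregate neighbour messages

# Toy usage: 5 "words" with 8-dim embeddings (shapes are illustrative).
X = torch.randn(5, 8)
Wq, Wk, Wv = (torch.randn(8, 8) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)          # -> (5, 8)
```

Masking the softmax to a sparse neighbourhood instead of all words essentially recovers a GAT-style layer.
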
This connection is known to many people, but I missed having all the information in one place, so I wrote it up myself.

I'd love to get feedback and improve this post!