Dominique Beaini
Lead researcher at @valence_ai, Adjunct Prof. at @UMontreal, Associate Prof. at @Mila_Quebec; interested in drug discovery, GNNs, physics, robotics, and biology
Mar 2, 2022 9 tweets 3 min read
🧵(0/8) Graph theory and GNNs can be scary at first with so many architectures. Here I propose the Maze analogy to help make it more intuitive.

Top 6 strategies for navigating a maze: walking, coloring the way, squeezing through, using a map, destroying walls, or using wings.

(1/8) Walk and look around: simply walk randomly and look at the corners next to you (GCN, GAT) and the path in-between corners (MPNN).

But different corners can look the same (WL-test), it is easy to get lost (over-smoothing), and you may never reach your destination (over-squashing).
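
A minimal sketch (not code from the thread) of the "walk and look around" strategy: each node averages the features of its neighbours, the mean-aggregation step behind GCN-style layers, and stacking many such layers pushes node features toward each other, which is the over-smoothing mentioned above. The toy graph, feature sizes, and random weights are assumptions made purely for illustration.

```python
import numpy as np

def mean_aggregation_layer(adj, feats, weight):
    """One round of neighbour averaging followed by a linear map + ReLU.

    adj:    (n, n) adjacency matrix with self-loops, 0/1 entries
    feats:  (n, d_in) node features
    weight: (d_in, d_out) weight matrix (random here, learned in practice)
    """
    degree = adj.sum(axis=1, keepdims=True)     # neighbours incl. self
    smoothed = (adj @ feats) / degree           # average over the neighbourhood
    return np.maximum(smoothed @ weight, 0.0)   # linear transform + ReLU

# Toy 4-node path graph 0 - 1 - 2 - 3, with self-loops added.
adj = np.eye(4)
for u, v in [(0, 1), (1, 2), (2, 3)]:
    adj[u, v] = adj[v, u] = 1.0

rng = np.random.default_rng(0)
feats = rng.normal(size=(4, 8))
w1, w2 = rng.normal(size=(8, 8)), rng.normal(size=(8, 8))

h = mean_aggregation_layer(adj, feats, w1)
h = mean_aggregation_layer(adj, h, w2)
print(h.shape)  # (4, 8): every node has now "looked around" two hops
```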
Oct 7, 2020 5 tweets 4 min read
Proud to announce our newest graph #research #paper: we introduce directional aggregations, generalize convolutional #neuralnetworks to #graphs, and solve bottlenecks in GNNs. 1/5
arxiv.org/abs/2010.02863
Authors: @Saro2000 @vincentmillions @pl219_Cambridge @williamleif @GabriCorso

By using an underlying vector field F, we can define forward/backward directions and extend differential geometry to include directional smoothing and derivatives. By using different directional fields, the GNN aggregators become powerful enough to generalize CNNs. 2/5
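
A minimal sketch of the directional idea (assumptions noted in the comments; this is not the paper's code): an edge-level vector field F, here taken as the gradient of the first non-trivial Laplacian eigenvector, gives each neighbourhood a forward/backward direction, so besides the usual mean we can aggregate with a directional smoothing (weights proportional to |F|) and a directional derivative (signed finite difference along F). The tiny graph and feature values are invented for illustration.

```python
import numpy as np

def laplacian_eigenvector_field(adj):
    """F[u, v] = phi(v) - phi(u) on edges, using the Fiedler vector phi."""
    deg = np.diag(adj.sum(axis=1))
    lap = deg - adj
    _, eigvecs = np.linalg.eigh(lap)
    phi = eigvecs[:, 1]                      # first non-trivial eigenvector
    field = np.outer(np.ones(len(phi)), phi) - np.outer(phi, np.ones(len(phi)))
    return field * adj                       # keep values on edges only

def directional_aggregators(adj, field, feats, eps=1e-8):
    """Directional smoothing and directional derivative for every node."""
    abs_f = np.abs(field)
    norm = abs_f.sum(axis=1, keepdims=True) + eps
    w_smooth = abs_f / norm
    smoothing = w_smooth @ feats             # |F|-weighted neighbour average
    w_deriv = field / norm                   # signed weights along the field
    derivative = w_deriv @ feats - w_deriv.sum(axis=1, keepdims=True) * feats
    return smoothing, derivative

# Toy 4-node path graph 0 - 1 - 2 - 3.
adj = np.zeros((4, 4))
for u, v in [(0, 1), (1, 2), (2, 3)]:
    adj[u, v] = adj[v, u] = 1.0

field = laplacian_eigenvector_field(adj)
feats = np.arange(8.0).reshape(4, 2)
smooth, deriv = directional_aggregators(adj, field, feats)
print(smooth.shape, deriv.shape)  # (4, 2) each; concatenate with the mean aggregator in a layer
```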