Daniele Grattarola
Postdoc @EPFL | Graph neural networks | Protein design | https://t.co/AThPhQYL7Q | @SmarterPodcast | Ex IDSIA, @USI_INF, @neuron2brain
Oct 7, 2022
📣📄 Introducing "Generalised Implicit Neural Representations"!

We study INRs on arbitrary domains discretized by graphs.
Applications in biology, dynamical systems, meteorology, and DEs on manifolds!

#NeurIPS2022 paper with @trekkinglemon
arxiv.org/abs/2205.15674

1/n 🧵 First, what is an INR? It's just a neural network that approximates a signal on some domain.

Typically, the domain is a hypercube and the signal is an image or 3D scene.

We observe samples of the signal on a lattice (e.g., pixels), and we train the INR to map x -> f(x).
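To make the idea concrete, here is a hypothetical toy sketch of fitting a coordinate -> signal map from lattice samples. It is not the paper's model: random Fourier features plus least squares stand in for the usual coordinate MLP.

```python
import numpy as np

# Toy INR-style fit (assumption: random Fourier features + least squares
# as a stand-in for an MLP trained by gradient descent).
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 128)[:, None]        # lattice coordinates ("pixels")
y = np.sin(2 * np.pi * x[:, 0])            # observed signal samples

B = rng.normal(scale=4.0, size=(1, 16))    # random frequencies (assumption)

def phi(x):
    # feature map: lift coordinates into sin/cos features
    return np.concatenate([np.sin(x @ B), np.cos(x @ B)], axis=1)

# "train" the representation on the lattice samples
w, *_ = np.linalg.lstsq(phi(x), y, rcond=None)

x_query = np.array([[0.123]])              # query the signal off the lattice
pred = phi(x_query) @ w
```

Once trained, the representation can be queried at any coordinate, not just the observed lattice points; the paper generalizes the domain from a hypercube lattice to an arbitrary graph discretization.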
Oct 28, 2021
Here's why I like ✨graph cellular automata✨:

1. Decentralized / emergent computation on graphs is a fundamental principle of Nature
2. We can control their behavior using GNNs
3. They make oscillating bunnies sometimes 🐰

Soon at #NeurIPS2021

arxiv.org/abs/2110.14237

In the paper, we explore the most general possible setting for CA and show that we can learn arbitrary transition rules with GNNs.
Possible applications of this are in swarm optimization, neuroscience, epidemiology, IoT, traffic routing... you name it.
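The core loop of a graph cellular automaton can be sketched in a few lines: every node updates its state from an aggregate of its neighbours' states. Here `rule` is a hypothetical hand-written transition; in the paper a GNN plays this role and the rule is learned.

```python
import numpy as np

# One step of a toy graph cellular automaton (assumption: sum aggregation
# and a fixed tanh transition; a GNN would parameterize `rule`).
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0.]])              # adjacency of a 4-cycle
state = np.array([1.0, 0.0, 0.0, 0.0])     # initial node states

def rule(own, neigh_sum):
    # local transition: same function applied at every node
    return np.tanh(own + neigh_sum)

def step(state):
    # aggregate neighbour states, then apply the transition rule
    return rule(state, A @ state)

state = step(state)
```

Because the same local rule runs at every node with no central controller, global behavior (like the oscillating bunny) emerges purely from repeated local updates.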
Oct 12, 2021
In our new paper, we introduce a unifying and modular framework for graph pooling: Select, Reduce, Connect.
We also propose a taxonomy of pooling and show why small-graph classification is not telling us the full story.

Arxiv: arxiv.org/abs/2110.05292

Time for a 🧵 on 🎱:

Let's start from SRC, the "message-passing" of pooling.

S: Selects (some) input nodes to map to one (or more) supernodes. Essentially decides what information is contained in the new nodes.

R: Reduces the supernodes to singletons.

C: decides how the new nodes are Connected.
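The three steps above can be sketched with a dense cluster-assignment matrix. This is just one assumed instance of SRC (DiffPool-style, sum reduction); the framework covers many other Select/Reduce/Connect choices.

```python
import numpy as np

# Toy SRC pooling (assumption: dense assignment matrix, sum reduction).
X = np.arange(8.0).reshape(4, 2)           # features of 4 input nodes
A = np.array([[0, 1, 1, 0],
              [1, 0, 0, 1],
              [1, 0, 0, 1],
              [0, 1, 1, 0.]])              # input adjacency

# SELECT: assign nodes {0, 1} to supernode 0 and nodes {2, 3} to supernode 1
S = np.array([[1, 0],
              [1, 0],
              [0, 1],
              [0, 1.]])

X_pool = S.T @ X        # REDUCE: sum the features within each supernode
A_pool = S.T @ A @ S    # CONNECT: count edges between supernodes
```

Swapping out how S is computed (clustering, top-k scoring, learned assignments) and how R and C aggregate recovers most pooling methods in the literature as points in the same design space.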