First, what is an INR? It's just a neural network that approximates a signal on some domain.
Typically, the domain is a hypercube and the signal is an image or 3D scene.
We observe samples of the signal on a lattice (e.g., pixels), and we train the INR to map x -> f(x).
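To fix ideas, here's a minimal sketch of a vanilla INR in PyTorch. This is an illustration, not the architecture from the paper; real INRs typically use sinusoidal activations or Fourier features instead of plain ReLU.

```python
import torch
import torch.nn as nn

class INR(nn.Module):
    """A toy INR: an MLP mapping coordinates x to signal values f(x)."""
    def __init__(self, in_dim=2, hidden=256, out_dim=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, out_dim),
        )

    def forward(self, x):   # x: (N, in_dim) coordinates
        return self.net(x)  # (N, out_dim) signal values, e.g., RGB

# Training is just regression on the observed samples:
# pred = model(coords); loss = ((pred - values) ** 2).mean()
```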
Here we study the setting where, instead of samples on a lattice, we observe samples on a graph.
This means that the domain can be any topological space, but we generally don't know what that looks like.
To learn an INR in this case, we need a coordinate system to consistently identify points (nodes).
We achieve this with a spectral embedding of the graph, which provides a discrete approximation of the continuous Laplace-Beltrami eigenfunctions of the domain.
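Concretely, we take the first k eigenvectors of the graph Laplacian and use each node's row as its coordinates. A sketch with SciPy (the spectral_embedding helper is our naming; for large graphs you'd want a shift-invert eigensolver rather than which="SM"):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

def spectral_embedding(adjacency, k=16):
    """First k non-trivial Laplacian eigenvectors as node coordinates."""
    deg = np.asarray(adjacency.sum(axis=1)).ravel()
    lap = (sp.diags(deg) - adjacency).astype(np.float64)  # L = D - A
    # k + 1 smallest eigenpairs; the first eigenvector is constant, so drop it.
    vals, vecs = eigsh(lap, k=k + 1, which="SM")
    return vecs[:, 1:]  # (num_nodes, k); note: eigenvector signs are arbitrary
```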
We start by learning some signals on the Stanford bunny 🐰, the surface of a protein 🧬, and a social network 🌐.
Then we study the transferability of generalized INRs by looking at random graph models and super-resolution on manifolds.
We also look at conditioning the generalized INR on some global parameter, like time, which allows us to parametrize spatio-temporal signals on manifolds.
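A simple way to realize this (our sketch; the paper may condition differently) is to append the global parameter to every node's spectral coordinates before feeding them to the network:

```python
import torch

def conditioned_inputs(coords, t):
    """coords: (N, k) spectral embedding; t: scalar, e.g., time in [0, 1]."""
    t_col = torch.full((coords.shape[0], 1), float(t))
    return torch.cat([coords, t_col], dim=1)  # (N, k + 1): one INR input per (node, t)
```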
Then we look into using a single INR to store multiple signals on multiple domains.
The INR can memorize the electrostatics of up to 1000 proteins almost perfectly.
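One way to implement this multi-signal setup (a hypothetical sketch, not necessarily the paper's exact mechanism) is to learn a latent code per signal and concatenate it with the spectral coordinates:

```python
import torch
import torch.nn as nn

class MultiSignalINR(nn.Module):
    """One network, many signals: condition on a learned per-signal code."""
    def __init__(self, num_signals, k=16, code_dim=32, hidden=256, out_dim=1):
        super().__init__()
        self.codes = nn.Embedding(num_signals, code_dim)  # e.g., one code per protein
        self.net = nn.Sequential(
            nn.Linear(k + code_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, out_dim),
        )

    def forward(self, coords, signal_idx):
        # coords: (N, k) spectral embedding; signal_idx: scalar LongTensor
        code = self.codes(signal_idx).expand(coords.shape[0], -1)
        return self.net(torch.cat([coords, code], dim=1))
```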
Finally, we explore a real-world application of generalized INRs to model meteorological signals (on the 🌍).
We train the model at a low spatial and temporal resolution and then predict the signal at double the resolution.
The results are quite stunning!
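The nice part is that super-resolution falls out of the coordinate system for free: the INR only ever sees spectral coordinates, so predicting at a finer resolution is just evaluating the trained network on the embedding of a finer graph. A sketch reusing the spectral_embedding helper and model from above (in practice, eigenvector signs and ordering must be aligned between the two graphs):

```python
import torch

# fine_adjacency: hypothetical adjacency of the higher-resolution graph
coords_fine = spectral_embedding(fine_adjacency, k=16)
with torch.no_grad():
    signal_fine = model(torch.tensor(coords_fine, dtype=torch.float32))
```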
We also tried an experiment (suggested by a reviewer) where we supervise the INR using the Laplacian of the signal.
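In code, that supervision is straightforward (a sketch under the assumption that the loss compares the Laplacians of the predicted and observed signals; note this pins the signal down only up to its constant component):

```python
import torch

def laplacian_loss(lap, pred, target):
    """lap: (N, N) graph Laplacian as a dense tensor; pred, target: (N, 1) signals."""
    return ((lap @ pred - lap @ target) ** 2).mean()
```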
Next paper: cellular automata (CA) on graphs. Why should you care?
1. Decentralized / emergent computation on graphs is a fundamental principle of Nature
2. We can control the behavior of CA using GNNs
3. They make oscillating bunnies sometimes 🐰
In the paper, we explore the most general possible setting for CA and show that we can learn arbitrary transition rules with GNNs.
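The core idea is that one message-passing step is the transition rule: each node updates its state as a learnable function of its own state and an aggregation of its neighbors'. A minimal sketch (the names are ours, not the paper's):

```python
import torch
import torch.nn as nn

class GCATransition(nn.Module):
    """A GNN layer used as the local rule of a graph cellular automaton."""
    def __init__(self, state_dim=8, hidden=64):
        super().__init__()
        self.msg = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
        self.update = nn.Sequential(
            nn.Linear(state_dim + hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, state_dim),
        )

    def forward(self, states, adjacency):
        # states: (N, state_dim); adjacency: (N, N) binary matrix
        agg = adjacency @ self.msg(states)  # sum messages over neighbors
        return self.update(torch.cat([states, agg], dim=1))

# Running the CA = applying the same local rule everywhere, repeatedly:
# for _ in range(steps): states = rule(states, adjacency)
```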
Possible applications of this are in swarm optimization, neuroscience, epidemiology, IoT, traffic routing... you name it.
I have always been fascinated by CA, and I cannot overstate how excited I am about this paper and the idea of emergence.
Keep an eye out for this topic, because the community is growing larger every day and doing lots of amazing things.
In our new paper, we introduce a unifying and modular framework for graph pooling: Select, Reduce, Connect.
We also propose a taxonomy of pooling operators and show why small-graph classification benchmarks are not telling us the full story.
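To make SRC concrete, here is the dense special case as a sketch (the function name is ours): Select assigns nodes to supernodes, Reduce aggregates their features, Connect induces the pooled adjacency. Dense methods like DiffPool fit this template directly.

```python
import numpy as np

def src_pool(x, a, sel):
    """x: (N, F) node features; a: (N, N) adjacency;
    sel: (N, K) Select matrix, sel[i, j] = membership of node i in supernode j."""
    x_pooled = sel.T @ x        # Reduce: aggregate node features into supernodes
    a_pooled = sel.T @ a @ sel  # Connect: induced edges between supernodes
    return x_pooled, a_pooled
```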