Daniele Grattarola
Postdoc @EPFL | Graph neural networks | Protein design | https://t.co/AThPhQYL7Q | @SmarterPodcast | Ex IDSIA, @USI_INF, @neuron2brain

Oct 7, 2022, 11 tweets

📣📄 Introducing "Generalised Implicit Neural Representations"!

We study INRs on arbitrary domains discretized by graphs.
Applications in biology, dynamical systems, meteorology, and differential equations on manifolds!

#NeurIPS2022 paper with @trekkinglemon
arxiv.org/abs/2205.15674

1/n 🧵

First, what is an INR? It's just a neural network that approximates a signal on some domain.

Typically, the domain is a hypercube and the signal is an image or 3D scene.

We observe samples of the signal on a lattice (e.g., pixels), and we train the INR to map x -> f(x).
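To make this concrete, here is a minimal INR sketch in PyTorch, using sine activations in the style of SIREN. The width, depth, and omega values are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class SineLayer(nn.Module):
    # Sinusoidal layer in the style of SIREN (Sitzmann et al., 2020).
    def __init__(self, in_dim, out_dim, omega=30.0):
        super().__init__()
        self.omega = omega
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x):
        return torch.sin(self.omega * self.linear(x))

class INR(nn.Module):
    # Maps coordinates x to approximate signal values f(x).
    def __init__(self, in_dim, hidden=256, out_dim=1, depth=4):
        super().__init__()
        layers = [SineLayer(in_dim, hidden)]
        layers += [SineLayer(hidden, hidden) for _ in range(depth - 2)]
        layers.append(nn.Linear(hidden, out_dim))
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)
```

Training is then plain regression: minimize the mean squared error between INR(x) and the observed samples f(x).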

Here we study the setting where, instead of samples on a lattice, we observe samples on a graph.

This means that the domain can be any topological space, but we generally don't know what that looks like.

To learn an INR in this case, we need a coordinate system to consistently identify points (nodes).

We achieve this with a spectral embedding of the graph, which provides a discrete approximation of the continuous Laplace-Beltrami eigenfunctions of the domain.
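Concretely, the coordinates are the leading non-trivial eigenvectors of the graph Laplacian. A minimal SciPy sketch, assuming the graph is given as a sparse adjacency matrix `adj` (a hypothetical variable):

```python
from scipy.sparse.csgraph import laplacian
from scipy.sparse.linalg import eigsh

def spectral_coordinates(adj, k=16):
    # Normalized graph Laplacian of the discretized domain.
    L = laplacian(adj, normed=True)
    # Smallest eigenpairs; drop the first (constant) eigenvector.
    _, eigvecs = eigsh(L, k=k + 1, which="SM")
    return eigvecs[:, 1:]  # shape: (num_nodes, k)
```

Each node then gets a k-dimensional coordinate vector that plays the role (x, y) pixel coordinates play for images.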

We start by learning some signals on the Stanford bunny 🐰, the surface of a protein 🧬, and a social network 🌐.

Then we study the transferability of generalized INRs by looking at random graph models and super-resolution on manifolds.

We also look at conditioning the generalized INR on some global parameter, like time, which allows us to parametrize spatio-temporal signals on manifolds.
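One simple conditioning scheme, sketched below, is to concatenate the global parameter t to every node's spectral coordinates; this is an assumption on our part, not necessarily the paper's exact mechanism. It reuses the INR class from the sketch above (with in_dim = k + 1).

```python
import torch

def eval_at_time(inr, coords, t):
    # coords: (num_nodes, k) float tensor; t: scalar, e.g. normalized time.
    t_col = torch.full((coords.shape[0], 1), float(t))
    return inr(torch.cat([coords, t_col], dim=-1))  # (num_nodes, out_dim)
```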

Then we look into using a single INR to store multiple signals across multiple domains.

The INR can memorize the electrostatics of up to 1000 proteins almost perfectly.
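One plausible way to share a single network across many signals is to condition on a learned per-signal embedding. This is an illustrative sketch with hypothetical names, reusing the INR class from earlier; it is not the paper's exact mechanism.

```python
import torch
import torch.nn as nn

class MultiSignalINR(nn.Module):
    def __init__(self, coord_dim, num_signals, embed_dim=64):
        super().__init__()
        self.embed = nn.Embedding(num_signals, embed_dim)
        self.inr = INR(coord_dim + embed_dim)  # INR class from the sketch above

    def forward(self, coords, signal_id):
        # signal_id: scalar LongTensor. Broadcast its embedding to every
        # node, then decode coordinates + embedding jointly.
        z = self.embed(signal_id).expand(coords.shape[0], -1)
        return self.inr(torch.cat([coords, z], dim=-1))
```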

Finally, we explore a real-world application of generalized INRs to model meteorological signals (on the 🌍).

We train the model at a low spatial and temporal resolution and then predict the signal at double the resolution.

The results are quite stunning!
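At test time, super-resolution amounts to embedding a finer discretization of the domain and querying the trained network there. A sketch, assuming a hypothetical finer adjacency matrix `adj_fine`, a trained `inr`, and the helper above (eigenvector sign and ordering must be matched consistently across resolutions, which this sketch glosses over):

```python
import torch

coords_fine = spectral_coordinates(adj_fine, k=16)
with torch.no_grad():
    pred_fine = inr(torch.tensor(coords_fine, dtype=torch.float32))
```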

We also tried an experiment (suggested by a reviewer) where we supervise the INR through the Laplacian of the signal rather than the signal values themselves.

This opens up a lot of interesting possibilities (e.g., see arxiv.org/abs/2209.03984).
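A hypothetical sketch of what that supervision could look like: match L @ f_theta(coords) to the observed Laplacian of the signal. Here `coords` (spectral coordinates as a tensor), `lap_signal` (the signal's Laplacian at the nodes), and the sparse Laplacian `L` from the earlier sketch are assumed given.

```python
import torch

L_dense = torch.tensor(L.toarray(), dtype=torch.float32)  # dense for clarity
pred = inr(coords)                                        # (num_nodes, 1)
loss = ((L_dense @ pred - lap_signal) ** 2).mean()
loss.backward()
```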

And that's all! I really enjoyed working on this paper, which was the result of many interesting discussions with amazing people.

We have lots of follow-up ideas on generalized @neural_fields that came out of this work, so stay tuned!

See you at NeurIPS ✨
