Daniele Grattarola
Oct 7 • 11 tweets • 5 min read
📣📄 Introducing "Generalised Implicit Neural Representations"!

We study INRs on arbitrary domains discretized by graphs.
Applications in biology, dynamical systems, meteorology, and DEs on manifolds!

#NeurIPS2022 paper with @trekkinglemon
arxiv.org/abs/2205.15674

1/n 🧡
First, what is an INR? It's just a neural network that approximates a signal on some domain.

Typically, the domain is a hypercube and the signal is an image or 3D scene.

We observe samples of the signal on a lattice (e.g., pixels), and we train the INR to map x -> f(x).
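To make this concrete, here is a minimal sketch of an INR as a plain ReLU MLP in PyTorch (the architecture and hyperparameters are illustrative assumptions, not the paper's exact setup):

```python
import torch
import torch.nn as nn

# Minimal INR: an MLP that maps a coordinate x to the signal value f(x).
# Here x is a 2D pixel coordinate and the target is an RGB value.
class INR(nn.Module):
    def __init__(self, in_dim=2, hidden=256, out_dim=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, out_dim),
        )

    def forward(self, x):
        return self.net(x)

# Training: regress the observed samples (e.g., the pixels of an image).
model = INR()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
coords = torch.rand(1024, 2)   # stand-in for pixel coordinates in [0, 1]^2
values = torch.rand(1024, 3)   # stand-in for observed RGB values
for _ in range(100):
    opt.zero_grad()
    loss = ((model(coords) - values) ** 2).mean()
    loss.backward()
    opt.step()
```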
Here we study the setting where, instead of samples on a lattice, we observe samples on a graph.

This means that the domain can be any topological space, but we generally don't know what that looks like.
To learn an INR in this case, we need a coordinate system to consistently identify points (nodes).

We achieve this with a spectral embedding of the graph, which provides a discrete approximation of the continuous Laplace-Beltrami eigenfunctions of the domain.
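For intuition, here is a small sketch of such a spectral embedding (assumptions: combinatorial Laplacian, dense eigendecomposition, and the first k non-trivial eigenvectors as coordinates; the paper's exact choices may differ):

```python
import numpy as np
import scipy.sparse as sp

def spectral_embedding(adj, k=8):
    """Use the first k non-trivial Laplacian eigenvectors as node coordinates.

    adj: (n, n) sparse adjacency matrix of the graph discretizing the domain.
    Returns an (n, k) array: one k-dimensional coordinate per node.
    """
    deg = np.asarray(adj.sum(axis=1)).ravel()
    lap = sp.diags(deg) - adj                  # combinatorial graph Laplacian
    # Dense eigendecomposition for clarity; use a sparse solver for large graphs.
    # The low-frequency eigenvectors approximate the Laplace-Beltrami
    # eigenfunctions of the underlying continuous domain.
    vals, vecs = np.linalg.eigh(lap.toarray())
    return vecs[:, 1:k + 1]                    # drop the constant eigenvector

# Example: a ring graph, whose eigenvectors are discrete sines and cosines.
n = 100
rows = np.arange(n)
adj = sp.coo_matrix((np.ones(n), (rows, (rows + 1) % n)), shape=(n, n))
adj = (adj + adj.T).tocsr()
coords = spectral_embedding(adj, k=4)          # (100, 4) node coordinates
```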
We start by learning some signals on the Stanford bunny 🐰, the surface of a protein 🧬, and a social network 🌐.
Then we study the transferability of generalized INRs by looking at random graph models and super-resolution on manifolds.
We also look at conditioning the generalized INR on some global parameter, like time, which allows us to parametrize spatio-temporal signals on manifolds.
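A rough sketch of one way to do this, assuming the global parameter is simply concatenated to each node's spectral coordinates (the actual conditioning mechanism may differ):

```python
import torch
import torch.nn as nn

# Hypothetical conditional INR: the network sees [spectral coords, t], so a
# single model parametrizes a spatio-temporal signal on the discretized domain.
class ConditionalINR(nn.Module):
    def __init__(self, coord_dim=8, hidden=256, out_dim=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(coord_dim + 1, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, out_dim),
        )

    def forward(self, coords, t):
        # coords: (n, coord_dim) spectral embeddings, t: global scalar parameter
        t = t.expand(coords.shape[0], 1)
        return self.net(torch.cat([coords, t], dim=-1))

model = ConditionalINR()
coords = torch.rand(50, 8)                      # stand-in spectral coordinates
pred_t0 = model(coords, torch.tensor([[0.0]]))  # signal at t = 0
pred_t1 = model(coords, torch.tensor([[0.5]]))  # same nodes at a later time
```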
Then we look into using a single INR to store multiple signals for multiple domains.

The INR can memorize the electrostatics of up to 1000 proteins almost perfectly.
Finally, we explore a real-world application of generalized INRs to model meteorological signals (on the 🌍).

We train the model at a low spatial and temporal resolution and then predict the signal at double the resolution.

The results are quite stunning!
We also tried an experiment (suggested by a reviewer) where we supervise the INR using the Laplacian of the signal.

This opens up a lot of interesting possibilities (e.g., see arxiv.org/abs/2209.03984).
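A hedged sketch of what that supervision could look like, assuming the loss matches the graph Laplacian of the prediction to the Laplacian of the target (the experiment's exact loss may differ):

```python
import torch
import torch.nn as nn

n, k = 50, 8
coords = torch.rand(n, k)               # stand-in spectral coordinates
target = torch.rand(n, 1)               # observed signal on the nodes
A = (torch.rand(n, n) < 0.1).float()    # toy random adjacency matrix
A = ((A + A.T) > 0).float()
A.fill_diagonal_(0)
L = torch.diag(A.sum(1)) - A            # combinatorial graph Laplacian

model = nn.Sequential(nn.Linear(k, 128), nn.ReLU(), nn.Linear(128, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(200):
    opt.zero_grad()
    pred = model(coords)
    # Supervise through the Laplacian of the signal rather than its values.
    loss = ((L @ pred - L @ target) ** 2).mean()
    loss.backward()
    opt.step()
```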
And that's all! I really enjoyed working on this paper, which was the result of many interesting discussions with amazing people.

We have lots of follow-up ideas on generalized @neural_fields that came up from this work, so stay tuned for the future!

See you at NeurIPS ✨

More from @riceasphait

Oct 28, 2021
Here's why I like ✨graph cellular automata✨:

1. Decentralized / emergent computation on graphs is a fundamental principle of Nature
2. We can control their behavior using GNNs
3. They make oscillating bunnies sometimes 🐰

Soon at #NeurIPS2021

arxiv.org/abs/2110.14237
In the paper, we explore the most general possible setting for CA and show that we can learn arbitrary transition rules with GNNs.
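As a rough illustration, here is a minimal graph CA with a learned local rule, assuming a simple sum-aggregation GNN (the architecture in the paper may differ):

```python
import torch
import torch.nn as nn

# Every node updates its state with the same learned rule, using only its own
# state and an aggregation of its neighbours' states.
class TransitionRule(nn.Module):
    def __init__(self, state_dim=16, hidden=64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(2 * state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, state_dim),
        )

    def forward(self, states, adj):
        # states: (n, state_dim), adj: (n, n) dense adjacency matrix
        neighbour_sum = adj @ states            # aggregate neighbour states
        return self.mlp(torch.cat([states, neighbour_sum], dim=-1))

rule = TransitionRule()
n = 32
adj = (torch.rand(n, n) < 0.1).float()          # toy random graph
adj = ((adj + adj.T) > 0).float()
states = torch.randn(n, 16)
for _ in range(10):                             # run the automaton for 10 steps
    states = rule(states, adj)
```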
Possible applications of this are in swarm optimization, neuroscience, epidemiology, IoT, traffic routing... you name it.
I have always been fascinated by CA, and I cannot overstate how excited I am about this paper and the idea of emergence.

Keep an eye out for this topic, because the community is growing larger every day and doing lots of amazing things.
Oct 12, 2021
In our new paper, we introduce a unifying and modular framework for graph pooling: Select, Reduce, Connect.
We also propose a taxonomy of pooling and show why small-graph classification is not telling us the full story.

Arxiv: arxiv.org/abs/2110.05292

Time for a 🧡 on 🎱:
Let's start from SRC, the "message-passing" of pooling.

S: Selects (some) input nodes to map to one (or more) supernodes. Essentially decides what information is contained in the new nodes.

R: Reduces each group of selected nodes to a single supernode, i.e., computes the supernode's attributes.

C: decides how the new nodes are Connected.
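To illustrate the template, here is a tiny NumPy sketch of SRC instantiated with a simple score-based (top-k style) pooling; the functions are illustrative, not the paper's reference implementation:

```python
import numpy as np

def select(x, ratio=0.5):
    """S: choose which input nodes map to supernodes (here: top-scoring nodes)."""
    scores = x.sum(axis=1)                      # toy scoring function
    k = max(1, int(ratio * x.shape[0]))
    return np.argsort(scores)[-k:]              # indices of the kept nodes

def reduce(x, idx):
    """R: compute each supernode's features (here: copy the selected node)."""
    return x[idx]

def connect(adj, idx):
    """C: decide how the supernodes are connected (here: induced subgraph)."""
    return adj[np.ix_(idx, idx)]

# Example on a random graph with 10 nodes and 4 features per node.
x = np.random.rand(10, 4)
adj = (np.random.rand(10, 10) < 0.3).astype(float)
adj = np.maximum(adj, adj.T)
idx = select(x)
x_pool, adj_pool = reduce(x, idx), connect(adj, idx)
```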
Why is this important? Well, it's a simple idea, but it lets us unify seemingly incompatible things like DiffPool and TopK under a common framework.

SRC is also a useful template for the community to share their contributions (e.g., in Spektral or PyG).
