Graph networks are limited to pairwise interactions. How to include higher-order components?
Read more below 👇 /n
The paper considers simplicial complexes, nice mathematical objects where having a certain component (e.g., a 3-way interaction in the graph) means also having all the lower-level interactions (e.g., all pairwise interactions between the 3 objects). /n
Simplicial complexes admit several notions of "adjacency" (four in total), covering both lower- and upper-order interactions.
They first propose an extension of the Weisfeiler-Lehman test that includes all four of them, showing it is strictly more powerful than the standard WL test. /n
A message-passing network can be defined similarly, by using four different types of exchange functions.
They also show that two of them are redundant, so the final formulation scales linearly. /n
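To make this concrete, here is a minimal sketch in plain Python/NumPy of one such message-passing step (names and structure are my own, not the authors' code), assuming the two adjacencies kept are the boundary and upper ones:

```python
import numpy as np

# Hypothetical sketch of one simplicial message-passing step,
# keeping only boundary and upper adjacencies.
# `features[s]` is the embedding of simplex s; `boundary[s]` and
# `upper[s]` are assumed to be precomputed neighbor lists.

def mpsn_layer(features, boundary, upper, W_self, W_b, W_u):
    new_features = {}
    for s, h in features.items():
        # Messages from the faces of s (its boundary).
        m_b = sum((features[t] for t in boundary.get(s, [])), np.zeros_like(h))
        # Messages from simplices sharing a coface with s (upper adjacency).
        m_u = sum((features[t] for t in upper.get(s, [])), np.zeros_like(h))
        # Update: combine self, boundary, and upper messages.
        new_features[s] = np.tanh(W_self @ h + W_b @ m_b + W_u @ m_u)
    return new_features
```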
They present many interesting experiments, building clique complexes from the original graphs and showing that MPSNs are better at discriminating graphs.
No public code yet, but it's apparently "coming soon"! 😎
Gather round, Twitter folks, it's time for our beloved
**Alice's adventures in a differentiable wonderland**, our magical tour of autodiff and backpropagation. 🔥
Slides below 1/n 👇
It all started from her belief that "very few things indeed were really impossible". Could AI truly be around the corner? Could differentiability be the only ingredient that was needed?
2/n
Wondering where to start, Alice discovered a paper by pioneer @ylecun promising "a path towards autonomous intelligent agents".
Intelligence would arise, it was argued, from several interacting modules, where everything was assumed to be *differentiable*.
A new method to sample structured objects (e.g., graphs, sets) with a formulation inspired by the state spaces of reinforcement learning.
I have collected a few key ideas and pointers below if you are interested. 👀
1/n
👇
*Flow Network based Generative Models for Non-Iterative Diverse Candidate Generation*, #NeurIPS paper by @folinoid, @JainMoksh, et al. introducing the method.
The task is learning to sample objects that can be built one piece at a time ("Lego-style").
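As a toy illustration of the "Lego-style" idea (my own hypothetical sketch, not the paper's training procedure): a policy grows an object one piece at a time and stops at a terminal action. In the paper, this policy is trained so that objects are sampled proportionally to a reward.

```python
import random

# `policy(state)` is assumed to return a dict {action: probability}.
def sample_object(policy, initial_state=(), max_steps=50):
    state = initial_state
    for _ in range(max_steps):
        probs = policy(state)
        action = random.choices(list(probs), weights=list(probs.values()))[0]
        if action == "stop":       # terminal action: object is complete
            return state
        state = state + (action,)  # attach the chosen piece
    return state

# Usage with a fixed toy policy over two piece types plus "stop".
toy_policy = lambda state: {"brick": 0.4, "plate": 0.4, "stop": 0.2}
print(sample_object(toy_policy))
```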
To a practical course, a practical exam: I asked each student to include a new branch in the repository showcasing additional tools and libraries.
The result? *Everyone* loves some hyper-parameter optimization. 😄
/n
Thanks to their work, you'll find practical examples of fine-tuning hyper-parameters using @OptunaAutoML, Ax (from @facebookai), and @raydistributed Tune, with Auto-PyTorch and Talos coming soon.
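For a flavour of what these branches look like, here is a minimal @OptunaAutoML example (the objective is a stand-in of mine, not from the course repo; a real branch trains a model there):

```python
import optuna

def objective(trial):
    # Sample a learning rate on a log scale.
    lr = trial.suggest_float("lr", 1e-5, 1e-1, log=True)
    # Stand-in objective: pretend the best learning rate is 1e-3.
    return (lr - 1e-3) ** 2

study = optuna.create_study(direction="minimize")
study.optimize(objective, n_trials=30)
print(study.best_params)
```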
An emerging approach to generative modelling that is attracting more and more attention.
If you are interested, I have collected some introductory material and thoughts in a small thread. 👇
Feel free to weigh in with additional material!
/n
An amazing property of diffusion models is their simplicity.
You define a probabilistic chain that gradually "noises" the input image until only white noise remains.
Then, generation is done by learning to reverse this chain. In many cases, the two directions have a similar form.
/n
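A minimal NumPy sketch of that forward chain (the linear variance schedule here is an illustrative assumption, not from a specific paper): each step mixes the image with a bit of Gaussian noise, q(x_t | x_{t-1}) = N(√(1−β_t) x_{t−1}, β_t I).

```python
import numpy as np

def forward_diffusion(x0, betas, seed=0):
    rng = np.random.default_rng(seed)
    x = x0
    for beta in betas:
        noise = rng.standard_normal(x.shape)
        x = np.sqrt(1.0 - beta) * x + np.sqrt(beta) * noise
    return x  # approaches pure white noise as the chain gets longer

x0 = np.ones((8, 8))                 # toy "image"
betas = np.linspace(1e-4, 0.2, 100)  # noise increases along the chain
print(forward_diffusion(x0, betas).std())  # std -> 1 (white noise)
```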
The starting point for diffusion models is probably "Deep Unsupervised Learning using Nonequilibrium Thermodynamics" by @jaschasd, Weiss, @niru_m, and @SuryaGanguli.