Diffusion models: an emerging approach in generative modelling that is gathering more and more attention.
If you are interested, I collected some introductory material and thoughts in a small thread. 👇
Feel free to weigh in with additional material!
/n
An amazing property of diffusion models is their simplicity.
You define a probabilistic chain that gradually "noises" the input image until only white noise remains.
Then, generation is done by learning to reverse this chain. In many cases, the two directions have a similar form.
/n
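The forward ("noising") direction can be sketched in a few lines. This is a toy numpy illustration with a made-up linear variance schedule, not any paper's actual setup:

```python
import numpy as np

rng = np.random.default_rng(0)

def forward_step(x, beta):
    """One step of the chain: scale the image down and add Gaussian noise."""
    return np.sqrt(1.0 - beta) * x + np.sqrt(beta) * rng.standard_normal(x.shape)

x = rng.standard_normal((8, 8))        # stand-in for an image
betas = np.linspace(1e-4, 0.2, 200)    # toy linear variance schedule
for beta in betas:
    x = forward_step(x, beta)
# after enough steps, x is approximately white noise, N(0, I)
```

Generation then amounts to learning a model that undoes each of these small steps in reverse order. /n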
The starting point for diffusion models is probably "Deep Unsupervised Learning using Nonequilibrium Thermodynamics" by @jaschasd, Weiss, @niru_m & @SuryaGanguli.
/n
*LocoProp: Enhancing BackProp via Local Loss Optimization*
by @esiamid, @_arohan_ & Warmuth
An interesting approach to bridging the gap between first-order, second-order, and "local" optimization methods. 👇
/n
The key idea is to use a single GD step to define auxiliary local targets for each layer, either at the level of pre- or post-activations.
Then, optimization is done by solving local "matching" problems wrt these new variables.
/n
What is intriguing is that the framework interpolates between multiple scenarios: the first step of each local solve recovers the original GD update, while the closed-form solution (in one case) resembles a preconditioned GD update. Optimization is "local" in the sense that it decouples across layers. /n
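To make the two-step idea concrete, here is my own toy numpy sketch of this recipe on a tiny 2-layer network (my reading of the thread, not the authors' code; learning rates and the local solver are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)

# Tiny network y_hat = W2 @ tanh(W1 @ x), squared loss, single sample.
W1 = rng.standard_normal((4, 3)) * 0.1
W2 = rng.standard_normal((2, 4)) * 0.1
x = rng.standard_normal(3)
y = rng.standard_normal(2)

a1 = W1 @ x            # pre-activation, layer 1
h1 = np.tanh(a1)       # post-activation, layer 1
a2 = W2 @ h1           # network output
err = a2 - y           # gradient of 0.5 * ||a2 - y||^2 wrt a2
loss_before = 0.5 * np.sum(err ** 2)

# Step 1: a single GD step in activation space defines per-layer targets.
eta = 0.1
t2 = a2 - eta * err                            # target for layer-2 output
g1 = (W2.T @ err) * (1 - np.tanh(a1) ** 2)     # backprop gradient wrt a1
t1 = a1 - eta * g1                             # target for layer-1 pre-activation

# Step 2: each layer solves its own local "matching" problem
# (here, a few GD steps on ||W @ input - target||^2), decoupled
# from the other layers.
for _ in range(10):
    W1 -= 0.1 * np.outer(W1 @ x - t1, x)
    W2 -= 0.1 * np.outer(W2 @ h1 - t2, h1)

a2_new = W2 @ np.tanh(W1 @ x)
loss_after = 0.5 * np.sum((a2_new - y) ** 2)
```

Note how the two steps decouple: once the targets are fixed, the two weight updates in the loop never talk to each other. /n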
Graph networks are limited to pairwise interactions. How can we include higher-order components?
Read more below 👇 /n
The paper considers simplicial complexes: nice mathematical objects where having a certain component (e.g., a 3-way interaction in the graph) implies also having all the lower-level interactions (e.g., all pairwise interactions between the 3 objects). /n
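This "downward closure" property is easy to state in code. A toy illustration of my own (not from the paper), with simplices stored as frozensets of vertices:

```python
from itertools import combinations

def is_simplicial_complex(simplices):
    """Check downward closure: every face of every simplex is present."""
    S = set(simplices)
    for s in simplices:
        for k in range(1, len(s)):
            for face in combinations(sorted(s), k):
                if frozenset(face) not in S:
                    return False
    return True

# A filled triangle: the 3-way interaction plus all its faces.
triangle = {frozenset(f) for f in
            [(0,), (1,), (2,), (0, 1), (0, 2), (1, 2), (0, 1, 2)]}
# Remove one pairwise interaction and closure breaks.
missing_edge = triangle - {frozenset((1, 2))}

print(is_simplicial_complex(triangle))      # True
print(is_simplicial_complex(missing_edge))  # False
```
/n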
Simplicial complexes admit several notions of "adjacency" (four in total), considering both lower- and upper-level interactions.
The authors first propose an extension of the Weisfeiler-Lehman (WL) test that includes all four of them, showing it is strictly more powerful than the standard WL test. /n
A cool new architecture that mixes several ideas from MLPs, CNNs, and ViTs, while trying to keep everything as simple as possible.
Small thread below. 👇 /n
The idea is strikingly simple:
(i) transform an image into a sequence of patches;
(ii) apply, in alternating fashion, an MLP to each patch, and an MLP to each feature across all patches.
Mathematically, it is equivalent to applying an MLP on rows and columns of the matrix of patches. /n
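A minimal numpy sketch of one such mixing block (toy, untrained weights; shapes and names are my own, not the paper's): the same two-layer MLP is applied to every column of the patch matrix, then another one to every row.

```python
import numpy as np

rng = np.random.default_rng(0)

n_patches, n_features, hidden = 16, 8, 32
X = rng.standard_normal((n_patches, n_features))   # one image as a patch matrix

# Token-mixing MLP: shared across features, acts on columns of X.
Wt1 = rng.standard_normal((hidden, n_patches)) * 0.1
Wt2 = rng.standard_normal((n_patches, hidden)) * 0.1
# Channel-mixing MLP: shared across patches, acts on rows of X.
Wc1 = rng.standard_normal((hidden, n_features)) * 0.1
Wc2 = rng.standard_normal((n_features, hidden)) * 0.1

def mixer_block(X):
    # Mix information across patches (same MLP on every column)...
    X = X + Wt2 @ np.maximum(Wt1 @ X, 0.0)
    # ...then across features (same MLP on every row, via transpose).
    X = X + (Wc2 @ np.maximum(Wc1 @ X.T, 0.0)).T
    return X

Y = mixer_block(X)   # same (n_patches, n_features) shape, both axes mixed
```

Stacking such blocks (plus normalization and a classification head, omitted here) is essentially the whole architecture. /n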
There has been some discussion (and memes!) sparked by this tweet by @ylecun, since several components can be interpreted (or implemented) using convolutional layers (e.g., 1x1 convolutions).
So, not a CNN, but definitely not a "simple MLP" either. /n