A new method to sample structured objects (e.g., graphs, sets), with a formulation inspired by the state space of reinforcement learning.
I have collected a few key ideas and pointers below if you are interested. 👀
1/n
👇
*Flow Network based Generative Models for Non-Iterative Diverse Candidate Generation*: the #NeurIPS paper by @folinoid, @JainMoksh et al. introducing the method.
The task is learning to sample objects that can be built 1 piece at a time ("lego-style").
For example: a complex molecule can be built by adding one atom at a time; an image by colouring one pixel per iteration; etc.
If you formalize this process, you get a state space where you move from an "empty" object to a complete object by traversing a graph.
3/n
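To make the "lego-style" construction concrete, here is a tiny toy sketch (my own illustration, not from the paper) where states are partial strings and each action appends one building block:

```python
# Toy illustration of the "build one piece at a time" view (my own example, not
# from the paper): states are partial strings, actions append one building block,
# and a trajectory walks the state graph from the empty object to a complete one.

state = ""                      # the "empty" object
trajectory = [state]
for action in ["C", "O", "H"]:  # hypothetical building blocks (e.g. atom symbols)
    state = state + action      # each action moves to a child state in the graph
    trajectory.append(state)

print(trajectory)               # ['', 'C', 'CO', 'COH'] -> terminal object "COH"
```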
The only thing you have is a reward function describing how good each object (e.g., a protein) is.
GFlowNets interpret this reward as a flow of water running through the graph: the flow arriving at each terminal node is the reward of the corresponding object.
4/n
Under this interpretation, you train a neural network to predict how the flow goes through the graph, by imposing that the incoming and outgoing flows at each node are conserved.
With this, you get one consistency equation per node that you can enforce with a loss function.
5/n
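As a rough illustration of this flow-matching idea, the conservation constraint can be turned into a per-node loss roughly like the sketch below (a simplified version, not the authors' code; `flow_net`, `parent_transitions`, `child_transitions`, and `is_terminal` are hypothetical helpers):

```python
import torch

# Simplified sketch of the flow-matching loss (not the authors' code).
# Assumed, hypothetical helpers: `flow_net(state, action)` returns the predicted
# log-flow on an edge; `parent_transitions(s)` / `child_transitions(s)` enumerate
# incoming (parent, action) pairs and outgoing actions; `is_terminal(s)` flags
# complete objects.

def flow_matching_loss(flow_net, state, reward):
    # Incoming flow: total flow over all edges leading into `state`.
    in_flow = torch.logsumexp(
        torch.stack([flow_net(p, a) for p, a in parent_transitions(state)]), dim=0
    )
    # Outgoing flow: total flow over all edges leaving `state`;
    # at a terminal node the outgoing flow must equal the reward.
    if is_terminal(state):
        out_flow = torch.as_tensor(reward).log()
    else:
        out_flow = torch.logsumexp(
            torch.stack([flow_net(state, a) for a in child_transitions(state)]), dim=0
        )
    # Conservation constraint turned into a squared loss, one term per node.
    return (in_flow - out_flow) ** 2
```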
The network trained in this way (GFlowNet) is enough to solve your original problem: by traversing the graph with probabilities proportional to the flow, you sample objects proportionally to their reward!
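Once such a network is trained, sampling can look roughly like this (same hypothetical helpers as above, plus an assumed `apply_action` transition function):

```python
import torch

# Sketch of sampling from a trained GFlowNet, with the same hypothetical helpers
# as above plus an assumed `apply_action(state, action)` transition function.

def sample_object(flow_net, initial_state):
    state = initial_state
    while not is_terminal(state):
        actions = child_transitions(state)
        log_flows = torch.stack([flow_net(state, a) for a in actions])
        probs = torch.softmax(log_flows, dim=0)    # transition ∝ predicted flow
        idx = torch.multinomial(probs, 1).item()
        state = apply_action(state, actions[idx])
    return state  # once trained, terminal objects are sampled ∝ their reward
```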
*Trajectory Balance: Improved Credit Assignment in GFlowNets*
Building on it, @JainMoksh, @folinoid, @ChenSun92 et al. introduce a much better training criterion that works on entire sampled trajectories, making training significantly faster.
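In code, the trajectory-balance objective boils down to something like the sketch below (my paraphrase, not the paper's code; `log_Z` is assumed to be a learned scalar and `log_pf` / `log_pb` hold the forward/backward log-probabilities of one sampled trajectory):

```python
import torch

# Sketch of the trajectory-balance objective (my paraphrase, not the paper's code).
# Assumptions: `log_Z` is a learned scalar (e.g. an nn.Parameter), `log_pf` and
# `log_pb` are 1-D tensors with the forward/backward transition log-probabilities
# collected along one sampled trajectory ending in object x with reward R(x).

def trajectory_balance_loss(log_Z, log_pf, log_pb, log_reward):
    # Enforce: log Z + Σ_t log P_F(s_{t+1}|s_t) = log R(x) + Σ_t log P_B(s_t|s_{t+1})
    lhs = log_Z + log_pf.sum()
    rhs = log_reward + log_pb.sum()
    return (lhs - rhs) ** 2
```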
*GFlowNets for Discrete Probabilistic Modeling* @alex_volokhova
The basic GFlowNet assumes your reward function is given, but you can also learn the reward jointly with the GFlowNet using ideas from energy-based modelling. In this work, they use it to generate images.
Yoshua Bengio wrote about GFlowNets: "I have rarely been as enthusiastic about a new research direction", adding that "creative juices are boiling" and that they may help in "bridging the gap between SOTA AI and human intelligence".
For a practical course, a practical exam: I asked each student to contribute a new branch to the repository showcasing additional tools and libraries.
The result? *Everyone* loves some hyper-parameter optimization. 😄
/n
Thanks to their work, you'll find practical examples of hyper-parameter tuning with @OptunaAutoML, AX (from @facebookai), and @raydistributed Tune, with Auto-PyTorch and Talos coming soon.
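For a flavour of what such a branch looks like, here is a minimal Optuna-style sketch (not from the course repo; `train_and_evaluate` is a hypothetical stand-in for a model's training loop):

```python
import optuna

# Minimal Optuna-style sketch (not from the course repo): tune a learning rate and
# hidden size for a hypothetical `train_and_evaluate` function that returns a
# validation score to maximize.

def objective(trial):
    lr = trial.suggest_float("lr", 1e-5, 1e-1, log=True)
    hidden = trial.suggest_int("hidden_size", 32, 512)
    return train_and_evaluate(lr=lr, hidden_size=hidden)  # hypothetical helper

study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=50)
print(study.best_params)
```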
Diffusion models: an emerging approach in generative modelling that is gathering more and more attention.
If you are interested, I collected some introductory material and thoughts in a small thread. 👇
Feel free to weigh in with additional material!
/n
An amazing property of diffusion models is their simplicity.
You define a probabilistic chain that gradually "noises" the input image until only white noise remains.
Then, generation is done by learning to reverse this chain. In many cases, the two directions have a similar form.
/n
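As a concrete illustration of the forward ("noising") direction, here is a minimal sketch assuming a simple linear beta schedule (a common choice, used here as an assumption rather than taken from any specific paper's code):

```python
import torch

# Minimal sketch of the forward ("noising") chain with a linear beta schedule
# (a common choice, assumed here; x0 is a batch of images scaled to [-1, 1]).

betas = torch.linspace(1e-4, 0.02, 1000)

def forward_noising(x0, t, betas):
    # Closed form of the chain: q(x_t | x_0) = N(sqrt(ᾱ_t) x_0, (1 - ᾱ_t) I)
    alpha_bar = torch.cumprod(1.0 - betas, dim=0)[t]
    noise = torch.randn_like(x0)
    xt = torch.sqrt(alpha_bar) * x0 + torch.sqrt(1.0 - alpha_bar) * noise
    return xt, noise  # the reverse model learns to predict `noise` from (xt, t)
```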
The starting point for diffusion models is probably "Deep Unsupervised Learning using Nonequilibrium Thermodynamics" by @jaschasd, Weiss, @niru_m, and @SuryaGanguli.
*LocoProp: Enhancing BackProp via Local Loss Optimization*
by @esiamid, @_arohan_ & Warmuth
Interesting approach to bridge the gap between first-order, second-order, and "local" optimization approaches. 👇
/n
The key idea is to use a single GD step to define auxiliary local targets for each layer, either at the level of pre- or post-activations.
Then, optimization is done by solving local "matching" problems wrt these new variables.
/n
What is intriguing is that the framework interpolates between multiple scenarios: the first step of the local solver recovers the original GD update, while the closed-form solution (in one case) resembles preconditioned GD. Optimization is "local" in the sense that it decouples across layers.
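To make the idea concrete, here is a very rough sketch of a LocoProp-style update (my own simplification, not the authors' implementation; it assumes per-layer inputs, post-activations, and their gradients have been recorded from a regular forward/backward pass):

```python
import torch

# Very rough sketch of the local-target idea (my own simplification, not the
# paper's algorithm). Assumptions: `layers` are linear modules, `inputs` and
# `activations` are the per-layer inputs and post-activations recorded during a
# forward pass, and `grads` are the gradients of the global loss w.r.t. those
# post-activations (from a single backward pass).

def locoprop_style_update(layers, inputs, activations, grads,
                          target_lr=0.1, inner_steps=5, inner_lr=1e-3):
    for layer, x, h, g in zip(layers, inputs, activations, grads):
        # 1) One GD step on the post-activation defines a local target.
        target = (h - target_lr * g).detach()
        x = x.detach()
        # 2) Local "matching" problem: adjust only this layer's weights so that
        #    its output moves toward the target (decoupled across layers).
        opt = torch.optim.SGD(layer.parameters(), lr=inner_lr)
        for _ in range(inner_steps):
            opt.zero_grad()
            local_loss = torch.nn.functional.mse_loss(torch.relu(layer(x)), target)
            local_loss.backward()
            opt.step()
```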