Score-based generative models are implicit optimal transport models; lifting them to fully nonlinear diffusions yields Schrödinger Bridge generative models.
Check out our latest work on log-likelihood training of Schrödinger Bridge 🌉! arxiv.org/pdf/2110.11291…
(1/3)
We show that Forward-Backward SDE theory connects the optimality of the Schrödinger Bridge with the log-likelihood of score-based models. This generalizes the recent results from @YSongStanford (NeurIPS 2021 spotlight) to fully nonlinear data-to-noise diffusion SDEs!
(2/3)
Our model is also the first optimal transport model to achieve results comparable to score- and flow-based models on CIFAR-10 generation (2.98 bits/dim and 3.18 FID), outperforming many other optimal transport alternatives!
(3/3)