Diffusion models have connections to multiple types of generative models. The previous resources cover the connection to score-based models; this one connects diffusion models to VAEs.
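As background (a standard identity, not specific to the linked post): a diffusion model can be read as a hierarchical VAE with a fixed Gaussian encoder, trained by maximizing the evidence lower bound

\log p_\theta(x_0) \;\ge\; \mathbb{E}_{q(x_{1:T}\mid x_0)}\!\left[\log \frac{p_\theta(x_{0:T})}{q(x_{1:T}\mid x_0)}\right]

where q is the fixed forward noising process and p_\theta is the learned reverse (denoising) process.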
As part of his amazing "Introduction to deep generative modeling" blog series, Dr. Jakub Tomczak provides a great intro to diffusion models, along with code examples.
After going through the introductory resources shared here, reading the original papers will be quite informative too!
The DDPM paper was the breakout diffusion model paper.
I have tried to include a diverse set of resources that provide different perspectives on diffusion models. Hopefully they give you different ways of thinking about and learning the topic!
If you like this thread, please share! 🙏
I am also working on my own blog post about diffusion models 👀
This is a diffusion model pipeline that goes beyond what AlphaFold2 did: predicting the structures of protein-molecule complexes containing DNA, RNA, ions, etc.
Google announces Med-Gemini, a family of Gemini models fine-tuned for medical tasks! 🔬
Achieves SOTA on 10 of the 14 benchmarks, spanning text, multimodal & long-context applications.
Surpasses GPT-4 on all benchmarks!
This paper is super exciting, let's dive in ↓
The team developed a variety of model variants. First, let's talk about the models built for language tasks.
The fine-tuning dataset is quite similar to Med-PaLM 2's, with one major difference:
self-training with search
(2/14)
The goal is to improve clinical reasoning and the ability to use search results.
Synthetic chains of thought, with and without search results in context, are generated; incorrect predictions are filtered out; the model is trained on the remaining CoTs; and the synthetic CoTs are then regenerated with the improved model (sketched below).
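Here's a minimal Python sketch of that loop. Everything in it is illustrative: the helper callables (generate_cot, run_search, finetune) and their signatures are assumptions, not Med-Gemini's actual API.

from typing import Callable

def self_train_with_search(model, examples, generate_cot: Callable,
                           run_search: Callable, finetune: Callable,
                           num_rounds: int = 2):
    # examples: list of (question, gold_answer) pairs
    for _ in range(num_rounds):
        train_set = []
        for question, gold in examples:
            # Generate synthetic CoT both without and with search results in context.
            for context in (None, run_search(question)):
                cot, prediction = generate_cot(model, question, context)
                # Filter: keep only CoTs whose final answer is correct.
                if prediction == gold:
                    train_set.append((question, context, cot))
        # Fine-tune on the filtered CoTs; the next round regenerates
        # synthetic CoTs with this improved model.
        model = finetune(model, train_set)
    return model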
Before I continue, I want to mention that this work was led by @RiversHaveWings, @StefanABaumann, and @Birchlabs. @DanielZKaplan and @EnricoShippole were also valuable contributors. (2/11)
High-resolution image synthesis with diffusion is difficult without multi-stage models (e.g., latent diffusion). It's even harder for diffusion transformers, since self-attention scales as O(n^2) in the number of tokens. So we want an easily scalable transformer architecture for high-res image synthesis. (3/11)
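To make the O(n^2) point concrete, here's a quick back-of-the-envelope calculation (the patch size of 4 is an assumption for illustration, not the paper's setting):

# Token count grows quadratically with resolution, and self-attention cost
# grows quadratically with token count, for a patch-based transformer.
PATCH = 4  # illustrative patch size
for res in (64, 256, 1024):
    tokens = (res // PATCH) ** 2
    attn_pairs = tokens ** 2  # pairwise interactions computed by self-attention
    print(f"{res}x{res}: {tokens:,} tokens, {attn_pairs:,} attention pairs")

Going from 256x256 to 1024x1024 multiplies the token count by 16 and the attention cost by 256, which is why single-stage pixel-space diffusion transformers get expensive fast.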