🔥 New paper! Check out how we use diffusion models (the same type of model used in #dalle2 and #Imagen) to detect anomalies in the brain!
In tinyurl.com/5eu59826, we found that using transformers for anomaly detection has a few limitations, e.g. limited context and exposure bias. Moreover, segmenting anomalies with transformers can take a long time, limiting their application. 3/n
Based on latent diffusion models (arxiv.org/abs/2112.10752, developed by @robrombach and @andi_blatt), we create models that locate image regions with a high probability of being anomalous. Then, we inpaint these regions using the model's knowledge of healthy data. 4/n
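To make the idea concrete, here is a minimal sketch of the mask-and-inpaint step in PyTorch. This is not the paper's code: `TinyEps` is a toy stand-in for a trained noise predictor, the mask is hand-made, and the stitching of known and resampled regions follows the generic RePaint-style recipe.

```python
import torch
import torch.nn as nn

class TinyEps(nn.Module):
    """Toy stand-in for a trained noise predictor (a real model would be a
    UNet trained on healthy scans, operating in the autoencoder's latent space)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Conv2d(1, 1, 3, padding=1)

    def forward(self, x_t, t):  # t is ignored by this stand-in
        return self.net(x_t)

def ddpm_schedule(T):
    betas = torch.linspace(1e-4, 0.02, T)
    alphas = 1.0 - betas
    return betas, alphas, torch.cumprod(alphas, dim=0)

@torch.no_grad()
def inpaint(model, x0, mask, T=1000):
    """Resample masked (likely anomalous) pixels from the model while keeping
    the unmasked, healthy-looking context fixed (RePaint-style stitching)."""
    betas, alphas, alpha_bars = ddpm_schedule(T)
    x = torch.randn_like(x0)
    for t in reversed(range(T)):
        ab = alpha_bars[t]
        # known region: diffuse the original image to the current noise level
        x_known = torch.sqrt(ab) * x0 + torch.sqrt(1 - ab) * torch.randn_like(x0)
        # unknown region: one reverse-diffusion step using the model
        eps = model(x, torch.tensor([t]))
        mean = (x - betas[t] / torch.sqrt(1 - ab) * eps) / torch.sqrt(alphas[t])
        noise = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
        x_unknown = mean + torch.sqrt(betas[t]) * noise
        # stitch: keep healthy context, let the model fill the flagged region
        x = mask * x_unknown + (1 - mask) * x_known
    return x

model = TinyEps()
x0 = torch.randn(1, 1, 64, 64)              # toy "brain slice"
mask = torch.zeros_like(x0)
mask[..., 20:40, 20:40] = 1.0               # region flagged as anomalous
healthy = inpaint(model, x0, mask, T=50)    # model's "healthy" completion
```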
Using the difference between the input image and the inpainted one, we obtain competitive performance on synthetic anomalies as well as on real MRI and CT images. With diffusion models and recent non-Markovian samplers, we speed up our method to just a few seconds! 5/n
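A sketch of the speed-up, reusing `TinyEps`, `ddpm_schedule`, and `x0` from the snippet above: a DDIM-style non-Markovian sampler takes large strides through the noise schedule, so a restoration needs ~10 model calls instead of hundreds. The noise-then-denoise restoration here is a simplified stand-in for the full inpainting pipeline, and `t_start` and the step count are illustrative, not the paper's settings.

```python
import torch

@torch.no_grad()
def ddim_restore(model, x0, T=1000, t_start=500, num_steps=10):
    _, _, alpha_bars = ddpm_schedule(T)
    # partially noise the input: coarse healthy structure survives,
    # fine-grained anomalies are destroyed
    ab = alpha_bars[t_start]
    x = torch.sqrt(ab) * x0 + torch.sqrt(1 - ab) * torch.randn_like(x0)
    # deterministic DDIM steps back to t=0, taking large strides
    ts = torch.linspace(t_start, 0, num_steps).long()
    for i in range(len(ts) - 1):
        t, t_prev = ts[i], ts[i + 1]
        ab_t, ab_prev = alpha_bars[t], alpha_bars[t_prev]
        eps = model(x, t)
        x0_hat = (x - torch.sqrt(1 - ab_t) * eps) / torch.sqrt(ab_t)  # predicted clean image
        x = torch.sqrt(ab_prev) * x0_hat + torch.sqrt(1 - ab_prev) * eps
    return x

restored = ddim_restore(TinyEps(), x0, num_steps=10)  # ~10 model calls in total
anomaly_map = (x0 - restored).abs()                   # residual map used for detection
```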
Looking forward to sharing more details soon about our models for 3D anomaly detection and our generative modelling experiments!
n/n
• • •
New preprint! We used diffusion models (the same type used in #dalle2 and @StableDiffusion) to generate 3D MRI images of the brain conditioned on several covariates, and we make 100k synthetic brains openly available.
Paper: arxiv.org/abs/2209.07162
Dataset: tinyurl.com/32p4hu7d 1/n
We adapted latent diffusion models (arxiv.org/abs/2112.10752, developed by @robrombach and @andi_blatt) to learn to generate 3D high-resolution medical images. Due to their scalability, we were able to train these models on images with millions of voxels! 2/n
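A minimal sketch of why the latent approach scales to 3D, with illustrative shapes rather than the paper's architecture: a first-stage autoencoder compresses the volume, and the diffusion model only ever operates on the small latent grid.

```python
import torch
import torch.nn as nn

class Autoencoder3D(nn.Module):
    """Toy stand-in for the first-stage compression model (e.g. a KL- or
    VQ-regularised autoencoder); the real one is learned, not a single conv."""
    def __init__(self):
        super().__init__()
        # 160x224x160 volume -> 20x28x20 latent: 512x fewer spatial positions
        self.enc = nn.Conv3d(1, 4, kernel_size=8, stride=8)
        self.dec = nn.ConvTranspose3d(4, 1, kernel_size=8, stride=8)

    def encode(self, x):
        return self.enc(x)

    def decode(self, z):
        return self.dec(z)

ae = Autoencoder3D()
volume = torch.randn(1, 1, 160, 224, 160)  # ~5.7M voxels
z = ae.encode(volume)                      # (1, 4, 20, 28, 20): ~45k latent values
print(volume.numel(), "->", z.numel())     # the diffusion model only ever sees z
recon = ae.decode(z)                       # decode generated latents back to a volume
```

Running the diffusion over ~11k latent positions instead of ~5.7M voxels is what makes high-resolution 3D generation tractable.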