#StableDiffusion is a #LatentDiffusionModel and performs its generative tasks efficiently on low-dimensional representations of high-dimensional training inputs. SD's VAE latent space preserves the relevant information contained in chest X-rays (CXR); they can be reconstructed with high fidelity.
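To give a sense of how "low-dimensional" the latent space is, here is a minimal sketch assuming the standard SD v1 VAE shapes (a 512x512x3 image encoded to a 64x64x4 latent; the helper name is ours):

```python
# Illustrative sketch with standard Stable Diffusion v1 VAE shapes
# (an assumption for this example): a 512x512 RGB image is encoded
# into a 64x64x4 latent, i.e. 8x spatial downsampling per axis.
def compression_factor(image_shape, latent_shape):
    """How many image values correspond to one latent value."""
    n_img = image_shape[0] * image_shape[1] * image_shape[2]
    n_lat = latent_shape[0] * latent_shape[1] * latent_shape[2]
    return n_img / n_lat

compression_factor((512, 512, 3), (64, 64, 4))  # -> 48.0
```

The diffusion process then runs entirely in this ~48x smaller space, which is what makes training and sampling tractable.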
#StableDiffusion’s output can be controlled at inference time via text prompts, but it is unclear how many medical imaging concepts SD has incorporated. Simple text prompts show how hard it is to get realistic-looking medical images out of the box without domain-specific training.
If SD’s frozen #CLIP text encoder does not cover enough medical concepts to handle radiology prompts, how about swapping it for a domain-specific one like PubMedBERT and projecting its embeddings? The results did not resemble CXR visually or quantitatively. Yikes!
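The "projecting" step can be sketched as a learned linear map from the domain encoder's token embeddings into the space the U-Net cross-attention expects. All dimensions and the random weights below are illustrative assumptions, not the actual trained projection:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed dims: PubMedBERT hidden size 768; SD v1 cross-attention also
# expects 77 tokens x 768 dims. Even with matching sizes, the two
# embedding spaces differ, so a learned projection W is still needed.
d_bert, d_sd, seq_len = 768, 768, 77

W = rng.standard_normal((d_bert, d_sd)) * 0.02        # stand-in for a learned projection
bert_tokens = rng.standard_normal((seq_len, d_bert))  # stand-in for PubMedBERT output

projected = bert_tokens @ W  # (77, 768), fed to the U-Net cross-attention
```

The catch, as the results showed: a linear projection alone cannot make the frozen U-Net understand embeddings it was never trained against.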
How about teaching the model new concepts? Using #TextualInversion (@RinonGal et al., 2022), we introduce new tokens like <lungxray> and a small set of CXRs. The results are visually and quantitatively better, but still far from medical reality. ("a photo of a <lungxray>")
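The core idea of textual inversion: everything stays frozen except the embedding vector of the new token, which is optimized against the diffusion loss. A toy sketch (tiny dimensions, and a dummy quadratic loss standing in for the real diffusion objective):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8  # toy embedding dim (SD v1 actually uses 768)

# Frozen vocabulary embeddings: never updated during textual inversion.
vocab = rng.standard_normal((100, d))

# Embedding for the new <lungxray> token: the ONLY trainable tensor.
new_tok = rng.standard_normal(d)

# Dummy target standing in for the gradient signal the diffusion loss
# would provide; the real objective is noise-prediction MSE.
target = rng.standard_normal(d)

lr = 0.1
for _ in range(200):
    grad = 2 * (new_tok - target)  # gradient of ||new_tok - target||^2
    new_tok -= lr * grad
# new_tok has converged toward target; vocab is untouched
```

This explains both the strength (very cheap, few images needed) and the limit we observed: one embedding vector can only steer the model toward concepts it already partially knows.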
Fine-tuning the U-Net of the #StableDiffusion pipeline with a semantic prior, as proposed by @natanielruizg et al. 2022, finally resulted in visually convincing chest x-rays with visible presence or absence of pleural effusion, depending on the text prompt.
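The "semantic prior" in that approach (DreamBooth-style) amounts to adding a prior-preservation term to the training objective: noise-prediction loss on the new images plus a weighted loss on samples from the class prior. A minimal numeric sketch with stand-in tensors (shapes and weighting are illustrative assumptions):

```python
import numpy as np

def mse(pred, target):
    """Mean squared error, the usual noise-prediction loss."""
    return float(np.mean((pred - target) ** 2))

rng = np.random.default_rng(0)

# Stand-ins for predicted vs. true noise (epsilon) on 4x64x64 latents:
# one pair for the instance (CXR) batch, one for the prior batch.
eps_pred_inst, eps_inst = rng.standard_normal((2, 4, 64, 64))
eps_pred_prior, eps_prior = rng.standard_normal((2, 4, 64, 64))

lambda_prior = 1.0  # weight of the prior-preservation term (assumed)
loss = mse(eps_pred_inst, eps_inst) + lambda_prior * mse(eps_pred_prior, eps_prior)
```

The prior term is what keeps the fine-tuned U-Net from collapsing onto the few new training images while it learns the CXR concept.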
Eventually we wanted to see how well a classification model pretrained on real CXRs performs on generated data: a DenseNet-121 predicted pleural effusion with 95% accuracy on the synthetic samples created with our best-looking approach.
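For clarity, the metric is plain classification accuracy over the synthetic set; the numbers below are a made-up toy illustration, not our evaluation data:

```python
def accuracy(preds, labels):
    """Fraction of predictions matching the ground-truth labels."""
    correct = sum(p == l for p, l in zip(preds, labels))
    return correct / len(labels)

# Toy example (hypothetical data): 19 of 20 synthetic CXRs classified
# correctly by the pretrained DenseNet-121 would give 95% accuracy.
preds  = [1] * 10 + [0] * 9 + [1]
labels = [1] * 10 + [0] * 10
accuracy(preds, labels)  # -> 0.95
```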
Our work highlights the power of pretrained large multi-modal models like #StableDiffusion and gives a glimpse of how much there is to explore for the medical imaging domain! Can’t wait to test this on other modalities and pathologies to increase the diversity of the output.
Wondering whether my job was jeopardized by AI this week or whether we’re still good, I read a new paper evaluating #GPT4V, a #GPT4 variant that handles image and text inputs. It produces *impressive* radiology reports. But let’s delve deeper into some of the results... #radiology #AI
Here, GPT4V correctly identified a fracture of the 5th metatarsal bone. However, this is not a Jones fracture (which is in the proximal part of the bone and sometimes doesn’t heal well, requiring more aggressive management). Almost correct ≠ Correct, esp. in medicine.
Here, the model correctly identified a suspicious pulmonary nodule but incorrectly described its location and explicitly hallucinated its size. Additionally, it inferred a lack of pathologically enlarged lymph nodes, which is impossible to determine from just one slice.
🎉Introducing RoentGen, a generative vision-language foundation model based on #StableDiffusion, fine-tuned on a large chest x-ray and radiology report dataset, and controllable through text prompts!
#RoentGen can generate a wide variety of radiological chest X-ray (CXR) findings with high fidelity and a high level of detail. Notably, it does so without being explicitly trained on class labels.
Building on previous work, #RoentGen is a fine-tuned latent diffusion model based on #StableDiffusion. Free-form medical text prompts condition the denoising process, yielding high-fidelity yet diverse CXRs and improving on a typical limitation of GAN-based methods.