🎉Introducing RoentGen, a generative vision-language foundation model based on #StableDiffusion, fine-tuned on a large chest x-ray and radiology report dataset, and controllable through text prompts!
#RoentGen is able to generate a wide variety of radiological chest x-ray (CXR) findings with high fidelity and a high level of detail. Of note, this is achieved without explicit training on class labels.
Building on previous work, #RoentGen is a fine-tuned latent diffusion model based on #StableDiffusion. Free-form medical text prompts are used to condition the denoising process, resulting in high-fidelity yet diverse CXRs, improving on a typical limitation of GAN-based methods.
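For illustration, a minimal sketch of prompt-conditioned sampling with the Hugging Face diffusers API; the checkpoint path, prompt, and sampling parameters are hypothetical placeholders, not the released RoentGen configuration:

```python
# Minimal sketch of prompt-conditioned sampling via Hugging Face
# `diffusers`; the checkpoint path is a hypothetical placeholder,
# not the released RoentGen weights.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "path/to/roentgen-checkpoint",  # hypothetical fine-tuned checkpoint
    torch_dtype=torch.float16,
).to("cuda")

# A free-form medical text prompt conditions the denoising process.
image = pipe(
    prompt="big right-sided pleural effusion with adjacent atelectasis",
    num_inference_steps=75,   # illustrative sampling settings
    guidance_scale=4.0,
).images[0]
image.save("synthetic_cxr.png")
```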
Context: Latent diffusion models like #StableDiffusion trained on large natural image-text datasets like @laion_ai’s #LAION-5B are able to generate highly realistic images controlled by text prompts, but their knowledge about specific domains like medical imaging is limited.
Few-shot fine-tuning of #StableDiffusion with a prior-preserving loss (#DreamBooth) previously allowed us to insert pathologies into generated CXRs via text prompt, but the generated images show comparatively little diversity and are constrained to the classes used during training.
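For reference, a sketch of a DreamBooth-style prior-preserving objective (our paraphrase of the idea, not the original training code):

```python
# Sketch of a DreamBooth-style prior-preserving loss (a paraphrase,
# not the original training code). The first term fits the few-shot
# medical instance images; the second anchors the model to images it
# generated for the broader class before fine-tuning, limiting drift.
import torch.nn.functional as F

def prior_preserving_loss(noise_pred_instance, noise_instance,
                          noise_pred_prior, noise_prior,
                          prior_weight=1.0):
    instance_loss = F.mse_loss(noise_pred_instance, noise_instance)
    prior_loss = F.mse_loss(noise_pred_prior, noise_prior)
    return instance_loss + prior_weight * prior_loss
```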
After scaling to tens of thousands of CXR image-report pairs, SD starts replacing previously learned concepts in favor of medical domain-specific concepts like radiographic abnormalities (e.g., pleural effusions), with increasing levels of correctness and new abilities.
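Conceptually, each fine-tuning step pairs a CXR with its report as conditioning. A sketch under standard diffusers conventions (component names and the 0.18215 scaling factor follow SD 1.x; hyperparameters are illustrative):

```python
# Sketch of one fine-tuning step on a CXR image-report pair, assuming
# standard `diffusers` components (VAE, CLIP text encoder, U-Net,
# noise scheduler); 0.18215 is SD 1.x's latent scaling factor.
import torch
import torch.nn.functional as F

def training_step(vae, text_encoder, unet, scheduler, pixels, input_ids):
    with torch.no_grad():
        # Encode the CXR into SD's latent space.
        latents = vae.encode(pixels).latent_dist.sample() * 0.18215
        # Encode the free-form radiology report as conditioning.
        cond = text_encoder(input_ids)[0]
    noise = torch.randn_like(latents)
    t = torch.randint(0, scheduler.config.num_train_timesteps,
                      (latents.shape[0],), device=latents.device)
    noisy = scheduler.add_noise(latents, noise, t)
    # The U-Net predicts the noise given noisy latents + report embedding.
    pred = unet(noisy, t, encoder_hidden_states=cond).sample
    return F.mse_loss(pred, noise)
```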
#RoentGen developed the ability to control CXR appearance through appropriate medical terminology and concepts. Note how, in the first image, the generated images follow the radiological convention of displaying the patient's right side on the left side of the image.
Compared to previous work, the outputs show a high degree of diversity. Note the variable appearance of the right-sided pleural effusion with varying amounts of interlobar fluid (top row, white arrowheads) for “big right (left) sided pleural effusion with adjacent atelectasis”.
Why synthetic CXRs? They can be used to improve downstream tasks! Fine-tuning RoentGen on fixed training data yields a 5% improvement for a classifier trained jointly on synthetic and real images, and a 3% improvement when training on a larger but purely synthetic training set.
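A minimal sketch of that joint-training setup, assuming hypothetical directory layouts and torchvision-style datasets (not the paper's exact pipeline):

```python
# Minimal sketch of the joint-training setup: a classifier sees the
# union of real and RoentGen-sampled CXRs. Paths, transforms, and the
# ImageFolder layout are assumptions, not the paper's pipeline.
from torch.utils.data import ConcatDataset, DataLoader
from torchvision import transforms
from torchvision.datasets import ImageFolder

tfm = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
real_ds = ImageFolder("data/real_cxr", transform=tfm)      # real images
syn_ds = ImageFolder("data/synthetic_cxr", transform=tfm)  # RoentGen samples
loader = DataLoader(ConcatDataset([real_ds, syn_ds]),
                    batch_size=64, shuffle=True)
```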
Outside of data augmentation, this high level of control over the generated output also opens up new avenues for data sharing (sharing models instead of the data itself) and education, and could help mitigate data imbalances and biases.
To balance the benefits of open-source science with the risks of improper use of generative models, we aim to share the weights in accordance with the MIMIC-CXR data usage agreement.
Weights can be requested in a tiered release at: forms.gle/Ggu2Kbu2MjMjxw…
Wondering whether my job was jeopardized by AI this week or whether we're still good, I read a new paper evaluating #GPT4V, a #GPT4 version that handles image and text inputs. It produces *impressive* radiology reports. But let's delve deeper into some of the results... #radiology #AI
Here, GPT4V correctly identified a fracture of the 5th metatarsal bone. However, this is not a Jones fracture (which is in the proximal part of the bone and sometimes doesn’t heal well, requiring more aggressive management). Almost correct ≠ Correct, esp. in medicine.
Here, the model correctly identified a suspicious pulmonary nodule but incorrectly described its location and explicitly hallucinated its size. Additionally, it inferred a lack of pathologically enlarged lymph nodes, which is impossible to determine from just one slice.
#StableDiffusion is a #LatentDiffusionModel and performs its generative tasks efficiently on low-dimensional representations of high-dimensional training inputs. SD's VAE latent space preserves the relevant information contained in CXRs; they can be reconstructed with high fidelity.
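To check this, one can round-trip a CXR through SD's VAE. A minimal sketch with diffusers, using the public SD 1.4 VAE (a random tensor stands in for a preprocessed CXR):

```python
# Round-trip sketch: encode a CXR with SD's VAE and decode it back,
# using the public SD 1.4 VAE. A random tensor stands in for a
# normalized CXR in [-1, 1]; preprocessing details are assumptions.
import torch
from diffusers import AutoencoderKL

vae = AutoencoderKL.from_pretrained(
    "CompVis/stable-diffusion-v1-4", subfolder="vae")

x = torch.randn(1, 3, 512, 512)  # stand-in for a preprocessed CXR
with torch.no_grad():
    z = vae.encode(x).latent_dist.sample()  # 1 x 4 x 64 x 64 latent
    x_rec = vae.decode(z).sample            # reconstruction from latents
```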
#StableDiffusion’s output can be controlled at inference time using text prompts, but it is unclear to what extent SD has incorporated medical imaging concepts. Simple text prompts show how hard it can be to get realistic-looking medical images out-of-the-box without domain-specific training.
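A quick way to see this for yourself, assuming the public SD 1.4 checkpoint; the prompt is illustrative:

```python
# Quick check of this limitation: prompt vanilla Stable Diffusion
# (no medical fine-tuning) with a radiology phrase and inspect the
# result. Checkpoint and prompt are illustrative.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16).to("cuda")
image = pipe("chest x-ray with a right-sided pleural effusion").images[0]
image.save("vanilla_sd_cxr.png")
```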