Zineng Tang
May 22 · 10 tweets · 6 min read
🖼️🎞️🔊📄Excited to introduce Composable Diffusion (CoDi), a new generative-AI foundation model that can take any combo of input modalities & generate any combo of output modalities (text, audio, image, video)!
codi-gen.github.io
@yzy_ai @ChenguangZhu2 @mohitban47 🧵👇
#CoDi
Many existing models are restricted to generating one modality from another, like text-to-image, text-to-audio, or audio-to-image. In contrast, CoDi can generate multiple modalities in parallel, and its input is not limited to a single modality such as text or image.
Training such a model presents significant costs: the number of input-output modality combinations scales exponentially, and training datasets are missing for many of those combinations. We propose a “Bridging Alignment” strategy to efficiently model the exponential number of input-output combinations with a linear number of training objectives. This also allows CoDi to freely condition on any input combination and generate any group of modalities, even if they are not present in the training data.
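To make the linear-vs-exponential point concrete, here is a minimal sketch of what a bridging-style alignment objective could look like: each modality's encoder is aligned to a shared text embedding space with a CLIP-style contrastive loss, so n modalities need only n paired objectives instead of O(n²) pairwise ones. The function names, dimensions, and temperature are illustrative assumptions, not the released implementation.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(a: torch.Tensor, b: torch.Tensor, temperature: float = 0.07):
    """Symmetric InfoNCE between two batches of paired embeddings (B, D)."""
    a = F.normalize(a, dim=-1)
    b = F.normalize(b, dim=-1)
    logits = a @ b.t() / temperature                      # (B, B) similarities
    targets = torch.arange(a.size(0), device=a.device)    # matched pairs on diagonal
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

def bridging_alignment_step(text_emb: torch.Tensor, other_embs: dict) -> torch.Tensor:
    # Text acts as the "bridge": aligning every other modality to it pairwise
    # transitively places all modalities in one shared space, with only a
    # linear number of losses (one per non-text modality).
    return sum(contrastive_loss(text_emb, emb) for emb in other_embs.values())
```

Because every modality lands in the same space, a combination never seen during training (say, audio+video as input) still produces embeddings the model knows how to consume.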
We build CoDi in two stages. First, we train a latent diffusion model (LDM) for each modality. These can be trained independently, ensuring high-quality generation for each modality. For conditional generation, e.g., audio+language→image, the input modalities are projected into a shared feature space, and the output LDM attends to the combination of input features. This multimodal conditioning mechanism prepares the diffusion model to condition on any modality or combination of modalities without directly training for such settings.
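A sketch of that conditioning mechanism, under assumptions: each input modality's features are linearly projected into the shared (bridge-aligned) space, and the projected token sequences are combined into a single context that the output LDM's cross-attention attends to. The class name, dimensions, and the simple concatenation rule are hypothetical choices for illustration.

```python
import torch
import torch.nn as nn

class MultimodalConditioner(nn.Module):
    def __init__(self, input_dims: dict, shared_dim: int = 768):
        super().__init__()
        # One projection per input modality into the shared feature space.
        self.proj = nn.ModuleDict(
            {name: nn.Linear(dim, shared_dim) for name, dim in input_dims.items()}
        )

    def forward(self, inputs: dict) -> torch.Tensor:
        # inputs: {modality: (B, T_m, D_m)} feature sequences from the
        # bridge-aligned encoders; any subset of modalities may be present.
        feats = [self.proj[name](x) for name, x in inputs.items()]
        # Concatenating projected tokens is one simple way to combine them;
        # the output LDM's cross-attention then attends over all of them.
        return torch.cat(feats, dim=1)  # (B, sum(T_m), shared_dim)

# Hypothetical usage inside one denoising step:
#   context = conditioner({"audio": audio_feats, "text": text_feats})
#   eps = unet(noisy_latent, t, context=context)
```

Because the conditioner only sees shared-space tokens, dropping or adding a modality changes the context length but not the mechanism, which is what lets the LDM condition on unseen combinations.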
In the second stage, we add a cross-attention module to each LDM and an environment encoder that projects each LDM's latent variable into a shared space. This enables CoDi to seamlessly generate any group of modalities jointly, without training on all generation combinations (again with a linear number of training objectives).
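Here is a minimal sketch of how joint generation could work at inference, assuming the mechanism described above: at each denoising step, every modality's LDM cross-attends to the other modalities' current latents after an environment encoder maps them into the shared space. The module names, call signature, and shapes are assumptions, not the released code.

```python
import torch
import torch.nn as nn

class JointDenoiser(nn.Module):
    def __init__(self, unets: nn.ModuleDict, env_encoders: nn.ModuleDict):
        super().__init__()
        self.unets = unets                # one denoising UNet per output modality
        self.env_encoders = env_encoders  # latent -> token sequence in shared space

    def step(self, latents: dict, t: torch.Tensor, context: torch.Tensor) -> dict:
        # Encode every modality's current latent into the shared environment space.
        env = {m: self.env_encoders[m](z) for m, z in latents.items()}
        eps = {}
        for m, z in latents.items():
            # Each UNet attends to the *other* modalities' environment features,
            # in addition to the input-conditioning context from stage 1.
            other = [env[o] for o in env if o != m]
            ctx = torch.cat([context] + other, dim=1) if other else context
            eps[m] = self.unets[m](z, t, context=ctx)
        return eps  # predicted noise per modality at this denoising step
```

Since each UNet only ever sees shared-space tokens, training pairs of modalities suffices: at test time any group of LDMs can be run together, which matches the linear-objectives claim above.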
We show many demos, including single-to-single, multi-to-single, and multi-to-multi modality generation, and our model's ability to generate high-fidelity, well-aligned examples across video, image, text, and audio.
Also see the awesome demo walkthrough by @altryne
