InstructPix2Pix: edit an image with text guidance in a single forward pass. Why run inversion or other per-image optimization at test time? Just use those techniques to build a dataset and train a new model.

A 🧶

Paper: arxiv.org/abs/2211.09800

Day 8 #30daysofDiffusion #Diffusion #MachineLearning
Image editing should be fast if you want to do it in real time. Methods like Textual Inversion or Prompt-to-Prompt optimize during inference, which makes them slow.
In this paper, the authors cleverly use such techniques to generate the training data instead, then fine-tune Stable Diffusion to perform edits in a single forward pass. They use two pretrained models to generate the data: GPT-3 (Davinci) and the SD model.
Why GPT-3, you might wonder? It is hard to write a large number of edit instructions manually, so the authors first create a small dataset (~700 examples) and fine-tune GPT-3 on it, which in turn generates large-scale (>400k) "plausible" edits to the captions. A rough sketch of this step is below.
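A minimal, hypothetical sketch of that step, assuming the legacy (pre-1.0) openai-python Completion API; the model id, prompt format, and output parsing are all made up for illustration (the caption/instruction example is from the paper):

```python
# Hypothetical sketch of the instruction-generation step (not the authors' code).
import openai  # legacy (pre-1.0) openai-python client

def generate_edit(caption: str) -> dict:
    """Ask the fine-tuned GPT-3 for an edit instruction + edited caption."""
    resp = openai.Completion.create(
        model="davinci:ft-placeholder",              # placeholder model id
        prompt=f"Caption: {caption}\nInstruction:",  # assumed prompt format
        max_tokens=64,
        temperature=0.7,
        stop=["\n\n"],
    )
    text = resp.choices[0].text
    instruction, _, edited_caption = text.partition("\nEdited caption:")
    return {
        "caption": caption,
        "instruction": instruction.strip(),
        "edited_caption": edited_caption.strip(),
    }

# generate_edit("photograph of a girl riding a horse")
# -> {"instruction": "have her ride a dragon",
#     "edited_caption": "photograph of a girl riding a dragon"}
```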
Then the authors take the original and edited captions and run them through Prompt-to-Prompt to generate the corresponding image pair. Check out this thread to see how Prompt-to-Prompt works.
The full data-generation pipeline: GPT-3 proposes the instruction and the edited caption, then Prompt-to-Prompt renders the before/after images. A rough sketch of this second step is below.
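A structural sketch of the pair-generation step; ptp_generate and clip_directional_similarity are hypothetical stand-ins, not real APIs (the paper does filter generated pairs with CLIP-based metrics, but the threshold below is only illustrative):

```python
# Hypothetical sketch of the image-pair generation step (not the authors' code).
def make_training_example(caption, instruction, edited_caption, seed):
    # Prompt-to-Prompt generates both images with shared cross-attention maps,
    # so the pair differs only where the edit demands it.
    image, edited_image = ptp_generate(caption, edited_caption, seed=seed)

    # Keep only pairs whose image change agrees with the caption change
    # (stand-in for the paper's CLIP-based filtering; threshold is illustrative).
    if clip_directional_similarity(image, edited_image,
                                   caption, edited_caption) > 0.2:
        return {"input": image, "edit": instruction, "output": edited_image}
    return None  # filtered out
```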
Now we have a dataset of {instruction, input image, edited image}. The authors fine-tune the SD model, adding extra input channels to its first conv layer so it can condition on the (encoded) original image. The loss is the usual latent-diffusion denoising objective, where c_I is the original image and c_T is the edit instruction.
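Reconstructed from the paper, the objective is:

```latex
L = \mathbb{E}_{\mathcal{E}(x),\,\mathcal{E}(c_I),\,c_T,\,\epsilon \sim \mathcal{N}(0,1),\,t}
    \left[ \big\| \epsilon - \epsilon_\theta\big(z_t, t, \mathcal{E}(c_I), c_T\big) \big\|_2^2 \right]
```

Here \mathcal{E} is the latent encoder and z_t is the noised latent of the edited image x at timestep t.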
Another important point: SD uses classifier-free guidance, and we want to control how much "guidance" goes into a generation. This model conditions on both text and an image, so there are two guidance scales, and the updated score estimate looks like the following.
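From the paper, the combined noise estimate with separate image and text scales s_I and s_T is:

```latex
\tilde{\epsilon}_\theta(z_t, c_I, c_T) =
    \epsilon_\theta(z_t, \varnothing, \varnothing)
  + s_I \big( \epsilon_\theta(z_t, c_I, \varnothing) - \epsilon_\theta(z_t, \varnothing, \varnothing) \big)
  + s_T \big( \epsilon_\theta(z_t, c_I, c_T) - \epsilon_\theta(z_t, c_I, \varnothing) \big)
```

And a minimal sketch of that combination in code; eps is a hypothetical stand-in for the UNet noise-prediction call, with None marking a dropped condition:

```python
# Two-scale classifier-free guidance (sketch; eps is a hypothetical stand-in).
def guided_noise(eps, z_t, t, image_cond, text_cond, s_I=1.5, s_T=7.5):
    e_uncond = eps(z_t, t, image=None, text=None)             # both dropped
    e_image  = eps(z_t, t, image=image_cond, text=None)       # image only
    e_full   = eps(z_t, t, image=image_cond, text=text_cond)  # image + text
    return (e_uncond
            + s_I * (e_image - e_uncond)   # pull toward the input image
            + s_T * (e_full - e_image))    # pull toward the edit instruction
```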
s_I controls similarity with the input image, while s_T controls consistency with the edit instruction. The paper shows a sweep over both hyperparameters; you can try the same sweep yourself with the snippet below.
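A minimal usage sketch, assuming the Hugging Face diffusers InstructPix2Pix pipeline and the released timbrooks/instruct-pix2pix checkpoint; the file names and chosen scales are just examples:

```python
import torch
from PIL import Image
from diffusers import StableDiffusionInstructPix2PixPipeline

pipe = StableDiffusionInstructPix2PixPipeline.from_pretrained(
    "timbrooks/instruct-pix2pix", torch_dtype=torch.float16
).to("cuda")

image = Image.open("input.jpg").convert("RGB")
edited = pipe(
    "make it look like a watercolor painting",  # edit instruction (c_T)
    image=image,                                # original image (c_I)
    num_inference_steps=20,
    guidance_scale=7.5,        # s_T: consistency with the instruction
    image_guidance_scale=1.5,  # s_I: similarity to the input image
).images[0]
edited.save("edited.jpg")
```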
I love this example. Here they show different variants of Abbey Road album covers.
Failure cases: the model is only as good as its training data, so it inherits the issues of GPT-3 and Prompt-to-Prompt editing, especially with spatial reasoning (e.g., moving something from left to right).
