AI Pub

Aug 21, 2022, 16 tweets

// Stable Diffusion, Explained //

You've seen the Stable Diffusion AI art all over Twitter.

But how does Stable Diffusion _work_?

A thread explaining diffusion models, latent space representations, and context injection:


First, a one-tweet summary of diffusion models (DMs).

Diffusion is the process of adding small, random noise to an image, repeatedly. (Left-to-right)

Diffusion models reverse this process, turning noise into images, bit-by-bit. (Right-to-left)

Photo credit: @AssemblyAI
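The forward (noising) process can be sketched in a few lines of numpy. This is a toy with a fixed noise level `beta`; real diffusion models use a per-step variance schedule:

```python
import numpy as np

def forward_diffuse(x, num_steps=1000, beta=0.02):
    """Repeatedly add a little Gaussian noise to an image.

    `beta` is a hypothetical fixed noise variance; real DMs vary it
    per step. The scaling keeps the overall variance stable.
    """
    xs = [x]
    for t in range(num_steps):
        noise = np.random.randn(*x.shape)
        x = np.sqrt(1 - beta) * x + np.sqrt(beta) * noise
        xs.append(x)
    return xs  # x_0 (clean image) ... x_T (statistically pure noise)

# Tiny 8x8 "image": after enough steps, the signal is drowned out.
img = np.ones((8, 8))
trajectory = forward_diffuse(img)
print(len(trajectory))  # 1001
```

After ~1000 steps, the original signal is scaled by (1 - beta)^T, which is essentially zero, so the final x_T is indistinguishable from noise.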


How do DMs turn noise into images?

By training a neural network to do so gradually.

With the sequence of noised images = x_1, x_2, ... x_T,

The neural net learns a function f(x,t) that denoises x "a little bit", producing what x would look like at time step t-1.


To turn pure noise into an HD image, just apply f several times!

The output of a diffusion model really is just
f(f(...f(f(N, T), T-1)..., 2), 1)
where N is pure noise, and T is the number of diffusion steps.
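That nested application is just a loop. A minimal sketch, with a toy stand-in for `f` (in a real DM, `f` is the trained U-Net, not this damping function):

```python
import numpy as np

def f(x, t):
    # Stand-in for the trained denoiser: here it just shrinks the
    # noise a little each step. A real f is a U-Net predicting
    # what x looked like at step t - 1.
    return 0.95 * x

def sample(T=50, shape=(8, 8)):
    x = np.random.randn(*shape)   # N: pure noise at step T
    for t in range(T, 0, -1):     # apply f at steps T, T-1, ..., 1
        x = f(x, t)
    return x                      # the generated "image"

img = sample()
print(img.shape)  # (8, 8)
```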

The neural net f is typically implemented as a U-net.


The key idea behind Stable Diffusion:

Training and computing a diffusion model on large 512 x 512 images is _incredibly_ slow and expensive.

Instead, let's do the computation on _embeddings_ of images, rather than on images themselves.


So, Stable Diffusion works in two steps.

Step 1: Use an encoder to compress an image "x" into a lower-dimensional, latent-space representation "z(x)".

Step 2: run diffusion and denoising on z(x), rather than x.

Diagram below!
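The two-step pipeline, sketched with toy stand-ins. In real Stable Diffusion, the encoder/decoder is a learned VAE and the denoiser is a U-Net; here they are hypothetical placeholders just to show the data flow:

```python
import numpy as np

def encode(x):
    # Toy encoder: average-pool 512x512 pixels down to a 64x64 latent.
    # SD's real encoder is a learned VAE, not a pooling op.
    return x.reshape(64, 8, 64, 8).mean(axis=(1, 3))

def decode(z):
    # Toy decoder: nearest-neighbor upsample back to pixel space.
    return np.kron(z, np.ones((8, 8)))

def denoise(z, t):
    # Stand-in for the U-Net that runs in latent space.
    return 0.9 * z

# Step 1: compress the image into latent space.
x = np.random.rand(512, 512)
z = encode(x)

# Step 2: run the (reverse) diffusion loop on z, then decode to pixels.
for t in range(50, 0, -1):
    z = denoise(z, t)
x_out = decode(z)

print(z.shape, x_out.shape)  # (64, 64) (512, 512)
```

The expensive iterative loop touches only the small 64x64 latent; the full-size image appears only once, at decode time.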


The latent space representation z(x) has much smaller dimension than the image x.

This makes the _latent_ diffusion model much faster and more expressive than an ordinary diffusion model.

See dimensions from the SD paper:
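Back-of-the-envelope arithmetic on those dimensions (using the 64x64x4 latent shape reported in the SD paper):

```python
# A 512x512 RGB image vs. the 64x64x4 latent from the SD paper.
pixel_dims = 512 * 512 * 3        # 786,432 values per image
latent_dims = 64 * 64 * 4         # 16,384 values per latent
print(pixel_dims // latent_dims)  # 48 -> ~48x fewer values to diffuse over
```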


But where does the text prompt come in?

I lied! SD does NOT learn a function f(x,t) to denoise x a "little bit" back in time.

It actually learns a function f(x, t, y), with y the "context" to guide the denoising of x.

Below, y is the image label "arctic fox".


When using Stable Diffusion to make AI art, the "context" y is the text prompt you enter.

That's how the text prompt works.

(Image credit: @ari_seff's video.)


But how does SD process context?

The "context" y, alongside the time step t, can be injected into the latent space representation z(x) either by:

1) Simple concatenation
2) Cross-attention

Stable Diffusion uses both.
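Both injection mechanisms, sketched with toy numpy tensors. The dimensions and random weight matrices here are hypothetical, not SD's real sizes; the point is the shape of the computation:

```python
import numpy as np

def softmax(a, axis=-1):
    e = np.exp(a - a.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

n, d = 16, 8                # n latent positions, each a d-dim vector
m = 4                       # m context tokens (e.g. embedded prompt words)
z = np.random.randn(n, d)   # latent representation z(x)
y = np.random.randn(m, d)   # context embedding of y

# 1) Simple concatenation: append context features to each latent
#    position's channels (here, a pooled summary of y).
y_summary = y.mean(axis=0, keepdims=True).repeat(n, axis=0)
z_concat = np.concatenate([z, y_summary], axis=1)

# 2) Cross-attention: queries come from the latent z, keys/values
#    from the context y, so each latent position attends over tokens.
Wq = np.random.randn(d, d)
Wk = np.random.randn(d, d)
Wv = np.random.randn(d, d)
Q, K, V = z @ Wq, y @ Wk, y @ Wv
attn_out = softmax(Q @ K.T / np.sqrt(d)) @ V

print(z_concat.shape, attn_out.shape)  # (16, 16) (16, 8)
```

Cross-attention is why a prompt can influence every spatial position of the latent differently, rather than adding one global signal.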


The cool part not talked about on Twitter: the context mechanism is incredibly flexible.

Instead of y = an image label,

Let y = a masked image, or y = a scene segmentation.

SD, trained on this different data, can now do image inpainting and semantic image synthesis!


(The above inpainting gif isn't from Stable Diffusion, FYI. Just an illustration of inpainting.)

Photos from the SD paper illustrating image inpainting and image synthesis, by changing the "context" representation y:


That's a wrap on Stable Diffusion! If you read the thread carefully, you understand:

1) The full SD architecture below
2) How SD uses latent space representations
3) How the text prompt is used as "context"
4) How changing the "context" repurposes SD to other tasks.


If this thread helped you learn about Stable Diffusion, likes, retweets, and follows are appreciated!

In addition to threads like this, I publish a "Best of AI Twitter" thread every week - last week's below.


PS, for more info check out the Stable Diffusion paper:


PPS, this thread is pretty technical!

Check out these two videos if you want to understand Stable Diffusion at a higher level, in a less technical format:

