Dmitrii Tochilkin
3D research @StabilityAI

Aug 21, 2022, 8 tweets

Inpainting mode in #DiscoDiffusion!
I've finally made parametrised guided inpainting for Disco, and applied it to make 2D and 3D animations more stable. This thread shows what's inside.

colab.research.google.com/github/kostari…

Inpainting can be used to repaint unwanted parts of a still image using a binary mask. The mask can be drawn inside the colab if you check the 'draw_mask' flag, or loaded by specifying a path.
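To make the mask convention concrete, here is a minimal sketch (my own illustration, not the colab's actual code — the function name and array layout are assumptions): mask = 1 marks pixels to repaint, mask = 0 keeps the original.

```python
import numpy as np

def apply_inpaint_mask(original, repainted, mask):
    """Blend a repainted image into the original.

    mask == 1 marks pixels to repaint; mask == 0 keeps the original.
    original/repainted have shape (H, W, C); a 2D mask broadcasts
    over the channel axis. (Hypothetical helper for illustration.)
    """
    if mask.ndim == 2:
        mask = mask[..., None]
    return mask * repainted + (1.0 - mask) * original

# Toy 2x2 RGB image: repaint only the top-left pixel.
original = np.zeros((2, 2, 3))
repainted = np.ones((2, 2, 3))
mask = np.array([[1.0, 0.0],
                 [0.0, 0.0]])
out = apply_inpaint_mask(original, repainted, mask)
```

The same convex combination is what a fractional (blurred) mask value produces at the edges: a weighted blend of old and new content.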

I've also applied inpainting to the 2D and 3D animation modes to make them more stable. Previously there was a tradeoff: either a stable animation with high skip_steps (but you get trails and poor detail in the regions that were missing from the previous frame)...

...or low skip_steps, but then the animation is very shaky. Instead, we can dynamically compute the region missing from the current frame after the camera movement and ✨inpaint✨ only that. So you get stable animation AND good details. 2D example:
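A toy sketch of the idea for a pure translation (my simplification — the real 2D mode warps with a full affine transform, and 3D mode uses depth-based reprojection): after shifting the previous frame by (dx, dy), the strip of pixels at the leading edges has no source content, and only that strip needs inpainting.

```python
import numpy as np

def missing_region_mask(height, width, dx, dy):
    """Mask of pixels revealed by an integer camera translation (dx, dy).

    After shifting the previous frame by (dx, dy), pixels along the
    leading edges have no source content and must be inpainted (1);
    everything else can be kept from the warped previous frame (0).
    (Hypothetical simplification for illustration.)
    """
    mask = np.zeros((height, width), dtype=np.float32)
    if dx > 0:
        mask[:, :dx] = 1.0   # content enters from the left edge
    elif dx < 0:
        mask[:, dx:] = 1.0   # content enters from the right edge
    if dy > 0:
        mask[:dy, :] = 1.0   # content enters from the top edge
    elif dy < 0:
        mask[dy:, :] = 1.0   # content enters from the bottom edge
    return mask

mask = missing_region_mask(4, 4, dx=1, dy=0)
```

For a general warp you would instead mark every output pixel whose backward-warped source coordinate falls outside the previous frame (or fails an occlusion check, in the video-source case).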

It works for 3D animations also!

It can also be used for Video Source animation to make stable & detailed warps (by inpainting only the occluded areas). But my current implementation builds on top of @devdef's consistency checks from the #warpfusion patreon colab, so it will probably remain closed for now.

How Disco Diffusion inpainting works: it modifies the diffusion process, forcing the pixels outside the mask to match the original image mixed with the amount of noise that corresponds to the current diffusion step.

There is also an option to blur the mask edges for smooth blending. The whiter the mask, the more diffusion steps are applied to that area: at step # skip_steps we start inverse-diffusing the area where mask = 1, and then continuously expand into the less bright areas.
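One way to read that schedule as code (a guess at the mechanism from the description above — the thresholding rule and names are my assumptions, not the colab's implementation): lower the brightness threshold as steps proceed, so dimmer blurred-edge pixels join the inverse diffusion progressively.

```python
import numpy as np

def active_region_at_step(soft_mask, step, skip_steps, total_steps):
    """Binary region being inverse-diffused at a given step.

    At step == skip_steps only the brightest pixels (soft_mask == 1)
    are active; as the run progresses the threshold drops linearly,
    so dimmer (blurred-edge) pixels are included later, receiving
    fewer steps and blending smoothly. (Hypothetical sketch.)
    """
    progress = (step - skip_steps) / max(total_steps - skip_steps, 1)
    threshold = 1.0 - progress
    return (soft_mask >= threshold).astype(np.float32)

soft_mask = np.array([1.0, 0.6, 0.2, 0.0])  # blurred mask edge profile
early = active_region_at_step(soft_mask, step=10, skip_steps=10, total_steps=50)
late = active_region_at_step(soft_mask, step=40, skip_steps=10, total_steps=50)
```

Pixels with higher mask values are diffused for more steps and so change more; mask = 0 pixels never activate and stay locked to the original image.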
