toyxyz · Dec 25, 2023
SparseCtrl is a feature added in AnimateDiff v3 that is useful for creating natural motion from a small number of input images. Let's take a quick look at how to use it!
There are currently two types: RGB and Scribble. If you use a single image, it works like img2vid.
The workflow for SparseCtrl RGB is as follows. For Scribble, change the preprocessor to scribble or lineart.
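The thread shows this with ComfyUI nodes, but as a point of reference, recent versions of Hugging Face diffusers expose SparseCtrl as SparseControlNetModel with a matching AnimateDiffSparseControlNetPipeline. Here is a minimal sketch of the RGB, single-image (img2vid-like) case; the base checkpoint, prompt, and input filename are placeholders, and the exact API may differ between diffusers versions:

```python
import torch
from diffusers import AnimateDiffSparseControlNetPipeline
from diffusers.models import AutoencoderKL, MotionAdapter, SparseControlNetModel
from diffusers.utils import export_to_gif, load_image

# SparseCtrl RGB checkpoint for AnimateDiff v3; use
# "guoyww/animatediff-sparsectrl-scribble" (with scribble/lineart inputs)
# for the Scribble variant instead.
controlnet = SparseControlNetModel.from_pretrained(
    "guoyww/animatediff-sparsectrl-rgb", torch_dtype=torch.float16
)
motion_adapter = MotionAdapter.from_pretrained(
    "guoyww/animatediff-motion-adapter-v1-5-3", torch_dtype=torch.float16
)
vae = AutoencoderKL.from_pretrained(
    "stabilityai/sd-vae-ft-mse", torch_dtype=torch.float16
)

pipe = AnimateDiffSparseControlNetPipeline.from_pretrained(
    "SG161222/Realistic_Vision_V5.1_noVAE",  # any SD 1.5 checkpoint
    motion_adapter=motion_adapter,
    controlnet=controlnet,
    vae=vae,
    torch_dtype=torch.float16,
).to("cuda")

# A single conditioning image pinned to frame 0 behaves like img2vid.
image = load_image("input.png")
video = pipe(
    prompt="a girl with silver hair, upper body, best quality",
    negative_prompt="low quality, worst quality",
    num_frames=16,
    num_inference_steps=25,
    conditioning_frames=[image],
    controlnet_frame_indices=[0],
    controlnet_conditioning_scale=1.0,
    generator=torch.Generator().manual_seed(42),
).frames[0]
export_to_gif(video, "sparsectrl_rgb.gif")
```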
You can also use multiple images and have the frames between them generated. In general, Scribble motion is a bit more natural than RGB. You can also use both together at the same time.
If you want to use multiple images, you can input them as a batch. For more than two images, it's convenient to use a custom node like VideoHelperSuite's Load Images. The images are applied evenly, in order, from the first frame to the last.
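In script terms, "applied evenly" just means spreading the keyframe indices uniformly from the first frame to the last. Continuing the diffusers sketch above, a small helper (hypothetical, for illustration) makes that mapping explicit:

```python
def spread_indices(num_images: int, num_frames: int = 16) -> list[int]:
    """Distribute keyframe indices evenly from frame 0 to the last frame."""
    if num_images < 2:
        return [0]
    step = (num_frames - 1) / (num_images - 1)
    return [round(i * step) for i in range(num_images)]

images = [load_image(p) for p in ["key_a.png", "key_b.png", "key_c.png"]]
video = pipe(
    prompt="a girl with silver hair, upper body, best quality",
    num_frames=16,
    num_inference_steps=25,
    conditioning_frames=images,
    controlnet_frame_indices=spread_indices(len(images)),  # -> [0, 8, 15]
).frames[0]
```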
This can also be used as a kind of frame interpolation. However, the style will change depending on the model you use, so make sure your input images are generated with the same model to get the desired results.
You can use the sparse method to adjust how the input image is applied.
For example, Starting ramps the influence from 1 down to 0 across the clip, and Ending ramps it from 0 up to 1. Note the timing of when the pose in the video matches the pose in the input image.
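Conceptually, these sparse methods are just per-frame strength curves. A rough illustration of the Starting and Ending ramps (not the node's actual implementation):

```python
import numpy as np

num_frames = 16
starting = np.linspace(1.0, 0.0, num_frames)  # strongest at frame 0, fading out
ending = np.linspace(0.0, 1.0, num_frames)    # fading in toward the last frame
```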
For the Index method, you can specify a frame directly: if you enter 8, frame 8 will be affected most strongly.
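In the diffusers sketch, the equivalent of the Index method is simply passing the frame number you want the keyframe pinned to (again assuming the controlnet_frame_indices parameter):

```python
# Pin the conditioning image to frame 8 of a 16-frame clip; neighboring
# frames are influenced progressively less.
video = pipe(
    prompt="a girl with silver hair, upper body, best quality",
    num_frames=16,
    num_inference_steps=25,
    conditioning_frames=[load_image("keyframe.png")],
    controlnet_frame_indices=[8],
).frames[0]
```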
motion_strength and motion_scale can be used to adjust the intensity of the motion.
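As a rough mental model (a hypothetical sketch, not AnimateDiff's actual code), you can think of a motion scale as multiplying the residual that the temporal motion module adds on top of the static per-frame features:

```python
import torch

def scaled_motion_block(hidden_states: torch.Tensor, motion_module,
                        motion_scale: float = 1.0) -> torch.Tensor:
    # Treat the motion module's output as a delta over the per-frame
    # features; scaling the delta exaggerates (>1.0) or dampens (<1.0)
    # the motion. For intuition only.
    delta = motion_module(hidden_states) - hidden_states
    return hidden_states + motion_scale * delta
```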
