Here's a simple tutorial on an AnimateDiff openpose + inpainting workflow! #AnimateDiff #stablediffusion #AIイラスト #AI #ComfyUI
The workflow and sample video can be downloaded here. drive.google.com/drive/folders/…
The structure is very simple. I used Openpose to recognize the pose of the person and inpainting to replace the person with a new character image.
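The actual graph is in the download link above; purely as an illustration of the same idea outside ComfyUI (no AnimateDiff motion module here), a per-frame Openpose + ControlNet inpaint pass in diffusers could look roughly like this — the model names, file paths, and settings below are assumptions, not the exact ones from the workflow:

```python
# Rough per-frame sketch with diffusers (not the ComfyUI graph itself);
# model names and settings are illustrative assumptions.
import torch
from controlnet_aux import OpenposeDetector
from diffusers import ControlNetModel, StableDiffusionControlNetInpaintPipeline
from diffusers.utils import load_image

openpose = OpenposeDetector.from_pretrained("lllyasviel/Annotators")
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/control_v11p_sd15_openpose", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

frame = load_image("frame_0001.png")        # one video frame (hypothetical path)
person_mask = load_image("mask_0001.png")   # white where the person is
pose_image = openpose(frame)                # skeleton image used as the ControlNet hint

result = pipe(
    prompt="new character, anime style",
    image=frame,
    mask_image=person_mask,
    control_image=pose_image,
    num_inference_steps=25,
).images[0]
result.save("out_0001.png")
```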
The overall workflow looks like this
I use the detector from the Impact pack to automatically generate a mask in the area where the person is located, and then use KJnodes' grow mask to expand the mask slightly.
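If you want to reproduce the "grow mask" step outside ComfyUI, it's essentially a morphological dilation; here's a minimal OpenCV sketch (the grow_px parameter name is my own, not the node's):

```python
import cv2
import numpy as np

def grow_mask(mask: np.ndarray, grow_px: int = 12) -> np.ndarray:
    """Expand a binary person mask by roughly grow_px pixels, similar in
    spirit to KJNodes' grow mask node (grow_px is an assumed name)."""
    kernel = cv2.getStructuringElement(
        cv2.MORPH_ELLIPSE, (2 * grow_px + 1, 2 * grow_px + 1)
    )
    return cv2.dilate(mask, kernel)
```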
1. Openpose, 2. inpaint mask, 3. Openpose ControlNet mask (this prevents lumps from appearing around the character when using Openpose)
Using an IP-adapter can improve consistency. However, if the weight is too high, it can affect poses and backgrounds.
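Continuing the hypothetical diffusers sketch from above, the IP-Adapter weight corresponds to the adapter scale; the repo/file names and the 0.5 value here are assumptions, just a starting point to tune:

```python
# Continues the diffusers sketch above (reuses pipe, frame, person_mask, pose_image).
pipe.load_ip_adapter("h94/IP-Adapter", subfolder="models", weight_name="ip-adapter_sd15.bin")
pipe.set_ip_adapter_scale(0.5)  # higher = more consistent character, but it starts to fight the pose/background

character_ref = load_image("character_ref.png")  # hypothetical reference image of the new character
result = pipe(
    prompt="new character, anime style",
    ip_adapter_image=character_ref,
    image=frame,
    mask_image=person_mask,
    control_image=pose_image,
    num_inference_steps=25,
).images[0]
```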
I recommend resizing the video to around 512px. It's also a good idea to test with just 16 frames first before processing the entire video.
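As a quick sketch of that preprocessing (a 512px short side and a 16-frame test cap are assumed values), with OpenCV:

```python
import cv2

def load_frames(path: str, short_side: int = 512, max_frames: int = 16):
    """Read a video, resize so the short side is ~512px, and stop after
    16 frames for a quick test run before processing the whole clip."""
    cap = cv2.VideoCapture(path)
    frames = []
    while len(frames) < max_frames:
        ok, frame = cap.read()
        if not ok:
            break
        h, w = frame.shape[:2]
        scale = short_side / min(h, w)
        frame = cv2.resize(frame, (round(w * scale), round(h * scale)))
        frames.append(frame)
    cap.release()
    return frames
```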
In addition to Openpose, you can use Depth or Canny to improve pose precision. However, if the weight is too high, the shape will be affected.
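For reference, a Canny hint image is easy to generate per frame; the thresholds below are common defaults, not tuned values from this workflow:

```python
import cv2

def canny_hint(frame, low: int = 100, high: int = 200):
    """Canny edge map used as an extra ControlNet hint alongside Openpose.
    Keep the ControlNet weight modest so the edges guide the pose without
    locking the silhouette to the original person."""
    edges = cv2.Canny(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), low, high)
    return cv2.cvtColor(edges, cv2.COLOR_GRAY2BGR)
```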
The next part is Facedetailer. Crop just the face from the image, enhance the detail, and uncrop it back into the frame.
Workflow: this time, I'm mainly using KJNodes' Batch Crop&uncrop nodes.
You can use Crop_size_mult to adjust the area to be cropped, and choose an appropriate size to upscale to.
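Outside ComfyUI, the crop/uncrop idea looks roughly like this; the function names and the 1.5 multiplier are my own illustrative choices, not the node's internals:

```python
import cv2

def crop_face(frame, box, crop_size_mult: float = 1.5, target: int = 512):
    """Crop an enlarged face region (box = x, y, w, h from a face detector),
    upscale it to `target` for detailing, and return the crop plus its placement.
    crop_size_mult mirrors the idea of the Crop_size_mult parameter."""
    x, y, w, h = box
    cx, cy = x + w // 2, y + h // 2
    half = int(max(w, h) * crop_size_mult / 2)
    fh, fw = frame.shape[:2]
    x0, y0 = max(cx - half, 0), max(cy - half, 0)
    x1, y1 = min(cx + half, fw), min(cy + half, fh)
    crop = cv2.resize(frame[y0:y1, x0:x1], (target, target))
    return crop, (x0, y0, x1, y1)

def uncrop_face(frame, detailed, placement):
    """Paste the detailed face back into the original frame."""
    x0, y0, x1, y1 = placement
    frame[y0:y1, x0:x1] = cv2.resize(detailed, (x1 - x0, y1 - y0))
    return frame
```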
You can also use Detailer for AD, which is included with the Impact Pack.
The Impact Pack node is simple but not very customizable, and KJNodes is the opposite. Use whichever you're comfortable with.