toyxyz
Feb 15, 2023 · 6 tweets · 3 min read
ControlNet gif2gif test #stablediffusion #AIイラスト #hed2image

More from @toyxyz3

Dec 25, 2023
SparseCtrl is a feature added in AnimateDiff v3 that is useful for creating natural motion with a small number of inputs. Let's take a quick look at how to use it!
There are currently two types: RGB and Scribble. If you use a single image, it works like img2vid.
The workflow for SparseCtrl RGB is as follows. For Scribble, change the preprocessor to scribble or lineart.
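For reference, here is a rough single-image (img2vid-style) SparseCtrl sketch using the diffusers library rather than ComfyUI. The model IDs, prompt, and frame indices are assumptions for illustration, not the thread's workflow:

```python
# Sketch only: SparseCtrl via the diffusers library (not the ComfyUI workflow in the thread).
# Model IDs and arguments are assumptions based on the public AnimateDiff v3 / SparseCtrl releases.
import torch
from diffusers import (
    AnimateDiffSparseControlNetPipeline,
    DPMSolverMultistepScheduler,
    MotionAdapter,
    SparseControlNetModel,
)
from diffusers.utils import export_to_gif, load_image

# AnimateDiff v3 motion module + SparseCtrl RGB ControlNet ("-scribble" for the Scribble variant)
adapter = MotionAdapter.from_pretrained(
    "guoyww/animatediff-motion-adapter-v1-5-3", torch_dtype=torch.float16
)
controlnet = SparseControlNetModel.from_pretrained(
    "guoyww/animatediff-sparsectrl-rgb", torch_dtype=torch.float16
)

pipe = AnimateDiffSparseControlNetPipeline.from_pretrained(
    "SG161222/Realistic_Vision_V5.1_noVAE",  # any SD 1.5 checkpoint should work here
    motion_adapter=adapter,
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")
pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)

# A single conditioning image pinned to frame 0 makes this behave like img2vid;
# several keyframes at different indices give sparse keyframe control instead.
image = load_image("first_frame.png")
frames = pipe(
    prompt="a girl walking on the beach, best quality",
    negative_prompt="low quality, worst quality",
    num_frames=16,
    num_inference_steps=25,
    conditioning_frames=[image],
    controlnet_frame_indices=[0],
    controlnet_conditioning_scale=1.0,
).frames[0]
export_to_gif(frames, "sparsectrl_rgb.gif")
```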
Nov 17, 2023
Let's go through a simple tutorial on the AnimateDiff openpose + inpainting workflow! #AnimateDiff #stablediffusion #AIイラスト #AI #ComfyUI
The workflow and sample video can be downloaded here. drive.google.com/drive/folders/…
The structure is very simple. I used Openpose to recognize the pose of the person and inpainting to replace the person with a new character image.
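Per frame, the same idea can be sketched with the diffusers library: detect the pose with openpose, then inpaint only the masked person while conditioning on that pose. This is an illustrative sketch, not the author's ComfyUI/AnimateDiff workflow; the model IDs, file names, and mask source are assumptions:

```python
# Per-frame sketch of pose-guided inpainting, not the full AnimateDiff video workflow.
# Model IDs, file names, and the mask source are assumptions for illustration.
import torch
from controlnet_aux import OpenposeDetector
from diffusers import ControlNetModel, StableDiffusionControlNetInpaintPipeline
from diffusers.utils import load_image

frame = load_image("frame_000.png")        # source frame containing the original person
person_mask = load_image("mask_000.png")   # white = region to repaint (the person)

# Extract the openpose skeleton from the source frame
openpose = OpenposeDetector.from_pretrained("lllyasviel/Annotators")
pose_image = openpose(frame)

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/control_v11p_sd15_openpose", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# Repaint only the masked person, keeping the original pose via ControlNet openpose
result = pipe(
    prompt="an anime girl in a blue dress, best quality",
    image=frame,
    mask_image=person_mask,
    control_image=pose_image,
    num_inference_steps=30,
).images[0]
result.save("frame_000_repainted.png")
```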
Feb 17, 2023
#02_ First, we need to render the background and the character's Openpose bones separately. We will use them for Depth2image and Pose2image, respectively. I used Blender.
#03_ The reason for using a rendered depth map rather than an automatically generated one is that the sharper the depth boundaries, the better the detail and sharpness of the final image. Top: Rendered depth / Bottom: Auto-generated depth
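A minimal Blender (bpy) sketch of one way to produce such a rendered depth map: enable the Z pass and normalize/invert it in the compositor so near objects come out bright, as ControlNet depth2image expects. The node choices and output path are illustrative assumptions, not the author's exact setup:

```python
# Blender (bpy) sketch: render a depth map with sharp boundaries for ControlNet depth2image.
import bpy

scene = bpy.context.scene
bpy.context.view_layer.use_pass_z = True  # enable the Z (depth) pass for the active view layer

# Build a tiny compositor graph: Render Layers -> Normalize -> Invert -> Composite
scene.use_nodes = True
tree = scene.node_tree
tree.nodes.clear()

rl = tree.nodes.new("CompositorNodeRLayers")
normalize = tree.nodes.new("CompositorNodeNormalize")  # map raw Z values to 0..1
invert = tree.nodes.new("CompositorNodeInvert")        # ControlNet depth expects near = bright
composite = tree.nodes.new("CompositorNodeComposite")

tree.links.new(rl.outputs["Depth"], normalize.inputs[0])
tree.links.new(normalize.outputs[0], invert.inputs["Color"])
tree.links.new(invert.outputs["Color"], composite.inputs["Image"])

scene.render.filepath = "//depth_background.png"  # path relative to the .blend file
bpy.ops.render.render(write_still=True)
```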
Feb 16, 2023
#01_ ControlNet Mocap2Image Workflow Quick Tutorial
#02_ The only image the Openpose model actually needs is an openpose skeleton. According to the developer's comments, images of real people are only used because of the Gradio UI.
#03_ So I used Blender to create a simple character that looks like an Openpose skeleton. You can download it for free here: toyxyz.gumroad.com/l/ciojz
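Because the Blender render is already a skeleton map, it can be passed straight to the ControlNet openpose model with no preprocessor step. A minimal diffusers sketch of that idea; the model IDs, file names, and prompt are assumptions, not the author's setup:

```python
# Sketch: feed a Blender-rendered openpose-skeleton image straight to ControlNet,
# skipping the pose preprocessor. Model IDs and file names are assumptions.
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline, UniPCMultistepScheduler
from diffusers.utils import load_image

skeleton = load_image("blender_openpose_render.png")  # already an openpose-style skeleton map

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-openpose", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")
pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)

image = pipe(
    prompt="a dancing anime character, best quality",
    image=skeleton,  # the rendered skeleton is the conditioning image; no detector needed
    num_inference_steps=25,
).images[0]
image.save("mocap2image.png")
```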
Feb 15, 2023
ControlNet Mocap2Image test #stablediffusion #AIイラスト #pose2image