Discover and read the best of Twitter Threads about #AIイラスト

Most recent (6)

A test of AI-assisted animation
Stable Diffusion (ebsynth utility, LoRA, ControlNet), VRoid, Unity, Krita
#StableDiffusion #AIイラスト
Build the model in VRoid → train a LoRA on it → set up a rough animation and camera in Unity and export to MP4 → cut out frames and generate masks with ebsynth utility → extract keyframes → Loopback 4~5; for Face Crop, set the Detection Method to Yolov5 (note: this requires editing the launch options) → manually denoise every frame in Krita and stitch them together
© Unity Technologies Japan/UCL
ebsynth utility ↓
github.com/s9roll7/ebsynt…
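The "Loopback 4~5" step in the pipeline above means running img2img repeatedly, feeding each output back in as the next input so successive passes converge on the trained style. A minimal sketch of that control flow, where `img2img` is a hypothetical stand-in for a real Stable Diffusion img2img call (the actual call would also take the prompt, denoising strength, etc.):

```python
# Sketch of the "Loopback 4~5" step: the img2img output is fed back as
# the next iteration's input. `img2img` is a hypothetical placeholder
# for a real Stable Diffusion img2img invocation.

def loopback(frame, img2img, iterations=4):
    """Apply img2img repeatedly, feeding each output back as input."""
    for _ in range(iterations):
        frame = img2img(frame)
    return frame

# Toy stand-in: each "pass" tags the frame so the chaining is visible.
result = loopback("frame_0001", lambda f: f + ">sd", iterations=4)
print(result)  # frame_0001>sd>sd>sd>sd
```

The thread's choice of 4~5 iterations is a per-project tuning value, not a fixed rule.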
A test of the prompt "monochrome line art". With a better model and refined prompts, this could probably be made to look much more like proper manga linework.
#AIart #AIイラスト #AI漫画 #StableDiffusion #AI呪文研究部 Prompt: (a school_girl)++, school_uniform, big chest, short
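The `++` after `(a school_girl)` is attention emphasis. In some SD front ends (InvokeAI's syntax, for example) each `+` multiplies the token weight by 1.1; the converter below rewrites that form into the explicit `(text:weight)` notation other UIs use. The 1.1 base is an assumption taken from that convention, not something stated in the thread.

```python
import re

def plus_to_weight(prompt: str, base: float = 1.1) -> str:
    """Rewrite "(text)++" emphasis into explicit "(text:weight)" form.

    Assumes the InvokeAI-style convention where each "+" multiplies
    the attention weight by `base` (1.1 by default).
    """
    def repl(m: re.Match) -> str:
        text, pluses = m.group(1), m.group(2)
        weight = round(base ** len(pluses), 2)
        return f"({text}:{weight})"
    return re.sub(r"\(([^()]+)\)(\++)", repl, prompt)

print(plus_to_weight("(a school_girl)++, school_uniform"))
# (a school_girl:1.21), school_uniform
```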
But AI's real strength is being able to mass-produce high-effort, full-colour artwork; why throw that away just to imitate human linework?
Besides, techniques for cleanly extracting line art from full-colour illustrations already exist, so there is no need to force SD to output grayscale images...
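One classic non-ML baseline for the line-extraction idea mentioned above is the "colour dodge" trick: grayscale the image, blur a copy, then divide the original by the blur. Flat regions divide out to white while sharp dark strokes survive. Real pipelines use learned extractors (sketchKeras, Anime2Sketch, and similar), so this is only the simplest illustrative sketch:

```python
import numpy as np

def box_blur(gray: np.ndarray, k: int = 3) -> np.ndarray:
    """Naive box blur via a padded sliding window."""
    pad = k // 2
    padded = np.pad(gray, pad, mode="edge")
    out = np.zeros_like(gray, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + gray.shape[0], dx:dx + gray.shape[1]]
    return out / (k * k)

def extract_lines(gray: np.ndarray) -> np.ndarray:
    """Divide by the blur: flat areas -> ~1.0, dark strokes stay dark."""
    blurred = box_blur(gray)
    return np.clip(gray / (blurred + 1e-6), 0.0, 1.0)

# A white canvas with one dark stroke: the stroke survives, the rest is white.
img = np.ones((8, 8))
img[4, :] = 0.1
lines = extract_lines(img)
```

Learned extractors handle soft shading and colour boundaries far better; the division trick only recovers strokes that are already darker than their surroundings.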
#02_ First we need to render the background and the character's OpenPose bones separately. We will use them for Depth2Image and Pose2Image respectively. I used Blender.
#03_ The reason for using rendered depth rather than automatically generated depth is that the sharper the depth boundary, the better the detail and sharpness of the image. Top: rendered depth / Bottom: auto-generated depth
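The sharpness argument in #03 is easy to see in the depth gradient: a rendered depth map has a hard step at an object boundary, while an estimated map smears the same step over several pixels. A synthetic 1-D comparison (the shapes and values are illustrative, not from the thread):

```python
import numpy as np

def max_edge_strength(depth: np.ndarray) -> float:
    """Largest per-pixel jump -- a crude proxy for boundary sharpness."""
    return float(np.abs(np.diff(depth)).max())

rendered = np.array([0.2] * 8 + [0.8] * 8)   # hard step at the boundary
estimated = np.linspace(0.2, 0.8, 16)        # same range, smeared ramp

print(max_edge_strength(rendered))   # ~0.6: the full jump in one pixel
print(max_edge_strength(estimated))  # ~0.04: spread over 15 pixels
```

ControlNet's depth conditioning inherits whatever edges the depth map provides, which is why the rendered map yields crisper silhouettes.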
ControlNet Mocap2Image test #stablediffusion #AIイラスト #pose2image
