Used the #stablediffusion2 #depth2img model to render a more photoreal layer on top of a walking animation I made in #UnrealEngine5 with #realtime clothing and hair on a #daz model. Breakdown thread 1/6 @UnrealEngine @daz3d #aiart #MachineLearning #aiartcommunity #aiartprocess #aiia
Reused this animation from a post I made a few months ago. Rendered in #UE5 @UnrealEngine using a @daz3d model with #realtime clothing via the @triMirror plugin. The walk cycle came from the #daz store; the hair from the Epic Marketplace. 2/6 #aiartprocess
Used SD2’s #depth2img model running locally in Automatic1111. Thanks to @TomLikesRobots for the help getting it working, and for showing how the model retains more consistency than normal img2img! I basically ran an img2img batch process over the image sequence. 3/6 #aiartprocess
With the #depth2img model I can use a low denoising strength of 0.25 and still get a noticeable shift in aesthetics toward photorealism. There’s still flickering in the anatomy, but it’s much better than before. 4/6 #aiartprocess
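For anyone wanting to reproduce this pass outside the WebUI, here is a minimal sketch of the same batch depth2img step using the diffusers port of SD2's depth model. The prompt, paths, and seed are placeholder assumptions, not the thread's actual settings; only the 0.25 denoising strength comes from the tweet above.

```python
import os
from pathlib import Path

import torch
from PIL import Image
from diffusers import StableDiffusionDepth2ImgPipeline

# Load SD2's depth-conditioned model (the diffusers port of #depth2img).
pipe = StableDiffusionDepth2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-depth",
    torch_dtype=torch.float16,
).to("cuda")

prompt = "photorealistic woman walking, film still"  # placeholder prompt
frames = sorted(Path("ue5_frames").glob("*.png"))    # hypothetical UE5 render sequence
os.makedirs("sd_frames", exist_ok=True)

generator = torch.Generator(device="cuda")

for frame_path in frames:
    frame = Image.open(frame_path).convert("RGB")
    generator.manual_seed(42)  # identical seed for every frame
    out = pipe(
        prompt=prompt,
        image=frame,
        strength=0.25,  # the low denoising strength from the tweet above
        generator=generator,
    ).images[0]
    out.save(Path("sd_frames") / frame_path.name)
```

Reseeding identically on every frame is one cheap way to chase frame-to-frame consistency; it reduces the flicker mentioned above but doesn't eliminate it.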
Finally, I added some film grain in Premiere Pro and simulated the retro glow of a Black Pro-Mist filter. I thought it would be interesting to see whether it would add to the realism. 5/6 #aiartprocess
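The grading itself was done in Premiere; as a rough stand-in, here is a numpy/PIL approximation of the two effects: gaussian noise for the grain, and a blurred-highlight "screen" bloom standing in for the Black Pro-Mist glow. All parameter values are guesses.

```python
import numpy as np
from PIL import Image, ImageFilter

def grain_and_glow(img, grain_amount=8.0, glow_radius=12,
                   glow_strength=0.35, threshold=200):
    """Approximate film grain plus a Pro-Mist-style highlight bloom."""
    rgb = np.asarray(img.convert("RGB")).astype(np.float32)

    # Film grain: per-pixel gaussian noise.
    rgb += np.random.normal(0.0, grain_amount, rgb.shape)

    # Glow: isolate bright areas, blur them, and screen-blend them back.
    bright = np.where(rgb.mean(axis=2, keepdims=True) > threshold, rgb, 0.0)
    bright_img = Image.fromarray(np.clip(bright, 0, 255).astype(np.uint8))
    halo = np.asarray(
        bright_img.filter(ImageFilter.GaussianBlur(glow_radius)), dtype=np.float32
    )
    out = 255.0 - (255.0 - rgb) * (255.0 - halo * glow_strength) / 255.0

    return Image.fromarray(np.clip(out, 0, 255).astype(np.uint8))

frame = Image.open("sd_frames/0001.png")  # hypothetical frame path
grain_and_glow(frame).save("graded_0001.png")
```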
While you can still see instability in the shoulders and collarbones, I’m really excited to see that we’re not far from creating photorealistic video on top of a 3D scaffold rendered in #UnrealEngine5. We just need more consistency from frame to frame. 6/6 #aiartprocess


More from @CoffeeVectors

Dec 17
A thought on resistance to change. I recently had a convo with a friend of mine who went thru a serious breakup that left her rattled. She talked about how hard it was to let go of the future she had envisioned for herself; that she felt so sure was going to come. 1/7
I feel like part of the resistance to change isn’t just rooted in the past and present, but also in your perception of how you thought the world was going to look and your place in it. Expectations are set and not met. 2/7
It’s like trying to turn a race car: the more momentum, the more energy it takes to change course. It’s not true in all cases, but when it is, it can be an incredible struggle. The weight of disappointment can be a terrible burden. 3/7
Read 7 tweets
Dec 15
I’m so fascinated by how much of understanding a concept can sometimes just be a language issue. Being able to ask #chatgpt to summarize, expand, rephrase and format explanations in different ways is so refreshing. 1/7
Like here’s #ChatGPT explaining how to cook a steak in pseudocode format. 2/7
Here I asked #ChatGPT to explain how version control works in @github but in the context of an anime scene from My Hero Academia. 3/7
Read 7 tweets
Oct 14
Started with a landscape in #midjourney. Used a depth map in @sidefx #houdini to create a scene mesh. Imported to #UnrealEngine5 @UnrealEngine to add grass and animate. Added music. Breakdown thread 1/8 #aiartprocess #MachineLearning #deeplearning #aiartcommunity #genart #aiia
I could have created a similar scene in just Unreal Engine and Quixel, but I wanted to see what I could do with this landscape image I generated in #midjourney. 2/8 #aiartprocess
I'm also trying to do more collaborations with other AI artists, so I used this as an excuse to research depth maps further and see how I could push them. I generated this LeReS depth map using "Boosting Monocular Depth Estimation to High Resolution" on GitHub. 3/8 #aiartprocess
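The thread used the LeReS-based "Boosting Monocular Depth Estimation" repo; as a stand-in sketch (a different estimator, but the same kind of grayscale output), here is monocular depth from MiDaS via torch.hub, which Houdini can then use to displace a grid into a scene mesh. The input filename is a placeholder.

```python
# Stand-in for the depth-map step: MiDaS via torch.hub instead of the
# LeReS/BoostingMonocularDepth pipeline the thread actually used.
import numpy as np
import torch
from PIL import Image

midas = torch.hub.load("intel-isl/MiDaS", "DPT_Large")
midas.eval()
transform = torch.hub.load("intel-isl/MiDaS", "transforms").dpt_transform

img = np.asarray(Image.open("midjourney_landscape.png").convert("RGB"))
with torch.no_grad():
    pred = midas(transform(img))
    # Resize the prediction back to the source resolution.
    pred = torch.nn.functional.interpolate(
        pred.unsqueeze(1), size=img.shape[:2],
        mode="bicubic", align_corners=False,
    ).squeeze()

# Normalize to 8-bit and save; Houdini can displace a grid with this.
depth = pred.cpu().numpy()
depth = (depth - depth.min()) / (depth.max() - depth.min())
Image.fromarray((depth * 255).astype(np.uint8)).save("depth.png")
```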
Read 8 tweets
Oct 6
Created with #deforum #stablediffusion from a walking animation I made in #UnrealEngine5 with #realtime clothing and hair on a #daz model. Combined the passes in Premiere. Breakdown thread 1/8 @UnrealEngine @daz3d #aiart @deforum_art #MachineLearning #aiartcommunity #aiartprocess
I started with this animation I rendered in #UE5 @UnrealEngine using a @daz3d model with #realtime clothing via the @triMirror plugin. Walk was from the #daz store. Hair was from the Epic Marketplace. Went for a floaty, underwater vibe with the hair settings. 2/8 #aiartprocess
Used the #UE5 video as input into #deforum #stablediffusion. Adjusted the settings to keep the results very close to the input frames. @deforum_art 3/8 #aiartprocess
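For reference, these are roughly the Deforum settings involved in video-input mode, based on the public notebook; names and defaults vary between Deforum versions, so treat them as assumptions rather than the exact values used in this thread.

```python
# Rough sketch of Deforum's video-input mode settings (names from the public
# notebook; treat them as assumptions -- they vary by Deforum version).
animation_mode = "Video Input"             # drive diffusion from existing frames
video_init_path = "/content/ue5_walk.mp4"  # hypothetical path to the UE5 render
extract_nth_frame = 1                      # process every frame, no skipping
use_init = True                            # each extracted frame seeds the sampler
strength = 0.65  # higher strength keeps output closer to the init frame
```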
Read 8 tweets
Oct 3
One thing I learned from working in fashion: what’s considered “good” changes from region to region, country to country. From working in video and photo, “good” changed depending on whether the person worked mostly with photographers or filmmakers. 1/10
Then there are differences in genre, branding strategy, or individual people. There’s overlap of course, what we might consider the transcendent or timeless qualities of certain aesthetics, but the differences in taste always had a relevant impact on budgets, rates, and networking. 2/10
So when I think about being multi-platform with social media, I’m actually thinking about mental health. When you think there’s only one market, one audience, one unified concept of taste, it’s easy to catastrophize when posts don’t do well. 3/10
Read 10 tweets
Sep 12
Took a face made in #stablediffusion driven by a video of a #metahuman in #UnrealEngine5 and animated it using the Thin-Plate Spline Motion Model, with GFPGAN for the face fix/upscale. Breakdown follows: 1/9 #aiart #ai #aiArtist #MachineLearning #deeplearning #aiartcommunity #aivideo #aifilm
First, here's the original video in #UE5. The base model was actually from @daz3d #daz and I used Unreal's #meshtometahuman tool to make a #metahuman version. 2/9 #aiartprocess
Then I took a single still frame from that video and ran it through #img2img in a local instance of #stablediffusion WebUI. After generating a few options I ended up with this image. 3/9 #aiartprocess
Read 9 tweets
