Started with a landscape in #midjourney. Used a depth map in @sidefx #houdini to create a scene mesh. Imported to #UnrealEngine5 @UnrealEngine to add grass and animate. Added music. Breakdown thread 1/8 #aiartprocess #MachineLearning #deeplearning #aiartcommunity #genart #aiia
I could have created a similar scene in just Unreal Engine and Quixel but I wanted to see what I could do with this landscape image I generated in #midjourney 2/8 #aiartprocess
I'm also trying to do more collaborations with other AI Artists so I used this as an excuse to research depth maps further and see how I could push them. I generated this LeRes depth map using "Boosting Monocular Depth Estimation to High Resolution" on GitHub. 3/8 #aiartprocess
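For anyone poking at the output themselves: depth estimators usually hand back a float array with an arbitrary range, and it helps to normalize it and save it as 16-bit before using it for displacement (8-bit maps band visibly on large terrain). A minimal Python sketch of that step, with a synthetic ramp standing in for a real LeRes map:

```python
import numpy as np
from PIL import Image

def save_depth_16bit(depth, path):
    """Normalize an arbitrary-range depth array to 0..65535 and save it as a
    16-bit PNG, so the displacement keeps smooth gradations."""
    d = depth.astype(np.float64)
    d = (d - d.min()) / max(d.max() - d.min(), 1e-9)
    # uint16 arrays are saved by Pillow as 16-bit grayscale PNGs
    Image.fromarray((d * 65535).astype(np.uint16)).save(path)

# toy example: a left-to-right ramp in place of a real depth estimate
save_depth_16bit(np.tile(np.linspace(0.0, 1.0, 256), (256, 1)), "depth16.png")
```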
I applied the depth map as a displacement map on a high-poly plane in @sidefx #houdini. I like Houdini because of the way it helps me deconstruct process with nodes and code and I can experiment with workflows easily. Rendered this natively in Mantra. 4/8 #aiartprocess
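Conceptually, the displacement step is just pushing every point of a dense grid along one axis by the sampled depth value. A toy numpy sketch of that idea (an illustration, not the actual Houdini setup):

```python
import numpy as np

def displace_grid(depth, scale=10.0):
    """Toy version of a displacement map on a high-poly plane:
    one grid point per pixel, pushed along +Z by depth * scale."""
    h, w = depth.shape
    xs, ys = np.meshgrid(np.linspace(0, 1, w), np.linspace(0, 1, h))
    return np.stack([xs, ys, depth * scale], axis=-1)  # (H, W, 3) positions

# a left-to-right ramp standing in for a real depth map
pts = displace_grid(np.tile(np.linspace(0.0, 1.0, 64), (64, 1)), scale=5.0)
```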
I then took the scene mesh and imported it as a .obj into #UE5. I wanted to apply grass with the foliage tool directly onto the mesh, but that didn't work. So I sculpted a plane in the shape of the foreground terrain using Unreal's native modelling tools. 5/8 #aiartprocess
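If you ever need to get a mesh like this out without Houdini, a displaced grid is easy to write as a quad-mesh .obj by hand. A hedged Python sketch (a toy ramp stands in for the real depth-displaced mesh, and the file name is made up):

```python
import numpy as np

def write_grid_obj(pts, path):
    """Write an (H, W, 3) grid of points as a quad-mesh .obj file —
    the same kind of file a scene mesh gets exported as for UE5."""
    h, w, _ = pts.shape
    lines = ["v %f %f %f" % tuple(p) for p in pts.reshape(-1, 3)]
    for r in range(h - 1):
        for c in range(w - 1):
            i = r * w + c + 1  # .obj vertex indices are 1-based
            lines.append("f %d %d %d %d" % (i, i + 1, i + w + 1, i + w))
    with open(path, "w") as f:
        f.write("\n".join(lines) + "\n")

# toy 4x4 grid with a ramp for Z, standing in for the displaced terrain
xs, ys = np.meshgrid(np.linspace(0, 1, 4), np.linspace(0, 1, 4))
write_grid_obj(np.stack([xs, ys, ys * 0.5], axis=-1), "scene_mesh.obj")
```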
I then used the foliage brush to add grass to the plane. There are lots of YouTube tutorials that explain how. 6/8 #aiartprocess
I applied a Third-Person game template in #UnrealEngine5 and did a screen recording of a character I controlled running around the scene in real-time. I also animated a camera and rendered a cinematic with Movie Render Queue. (1st post.) 7/8 #aiartprocess
There are clearly some rough edges to this project, but it's basically an exploration of different ways a 2D image might be converted into 3D with depth maps. I have some other ideas I'm going to try soon. If you have a cool process involving that, please share! 8/8 #aiartprocess


More from @CoffeeVectors

Oct 6
Created with #deforum #stablediffusion from a walking animation I made in #UnrealEngine5 with #realtime clothing and hair on a #daz model. Combined the passes in Premiere. Breakdown thread 1/8 @UnrealEngine @daz3d #aiart @deforum_art #MachineLearning #aiartcommunity #aiartprocess
I started with this animation I rendered in #UE5 @UnrealEngine using a @daz3d model with #realtime clothing using the @triMirror plugin. Walk was from the #daz store. Hair was from the Epic Marketplace. Went for a floaty, underwater vibe with the hair settings. 2/8 #aiartprocess
Used the #UE5 video as input into #deforum #stablediffusion. Adjusted the settings to keep the results very close to the input frames. @deforum_art 3/8 #aiartprocess
Oct 3
One thing I learned from working in fashion—what’s considered “good” changes from region to region, country to country. From working in video and photo, “good” changed depending on if the person worked mostly with photographers or filmmakers. 1/10
Then there’s differences in genre, branding strategy, or individual ppl. There’s overlap of course, what we might consider the transcendent or timeless qualities of certain aesthetics, but the differences in taste always had relevant impact on budgets, rates, and networking. 2/10
So when I think about being multi-platform with social media, I’m actually thinking about mental health. When you think there’s only one market, one audience, one unified concept of taste, when posts don’t do well it’s easy to catastrophize. 3/10
Sep 12
Took a face made in #stablediffusion driven by a video of a #metahuman in #UnrealEngine5 and animated it using Thin-Plate Spline Motion Model & GFPGAN for face fix/upscale. Breakdown follows: 1/9 #aiart #ai #aiArtist #MachineLearning #deeplearning #aiartcommunity #aivideo #aifilm
First, here's the original video in #UE5. The base model was actually from @daz3d #daz and I used Unreal's #meshtometahuman tool to make a #metahuman version. 2/9 #aiartprocess
Then I took a single still frame from that video and ran it through #img2img in a local instance of #stablediffusion WebUI. After generating a few options I ended up with this image. 3/9 #aiartprocess
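For reference, the AUTOMATIC1111 WebUI exposes img2img over HTTP when launched with --api. A sketch of building the request body — the field names follow that API, the endpoint and port below are the defaults, and the frame file here is just a stand-in:

```python
import base64

def img2img_payload(image_path, prompt, denoising_strength=0.35):
    """Build a request body for a local WebUI's /sdapi/v1/img2img endpoint.
    A low denoising_strength keeps the result close to the input frame."""
    with open(image_path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode("ascii")
    return {
        "init_images": [b64],   # the source frame, base64-encoded
        "prompt": prompt,
        "denoising_strength": denoising_strength,
        "steps": 30,
    }

# toy input file standing in for a real exported frame
with open("frame.png", "wb") as f:
    f.write(b"\x89PNG fake bytes")
payload = img2img_payload("frame.png", "portrait, cinematic lighting")
# then e.g.: requests.post("http://127.0.0.1:7860/sdapi/v1/img2img", json=payload)
```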
Sep 11
If you have a style or consistent aesthetic you’re going for with AI art, it might be a good idea to figure out how to get to that look with different tools, starting points, and process pathways. 1/8
We already see how fast these tools are changing. And as we iterate, as the tool space evolves, we can’t know how backwards compatible changes will be. Maybe we have more control with local SD, but maybe 2/8
parts of your future process will take on powerful pieces that are app-based or otherwise in a black box. You might ask yourself, how much of my style is locked in a specific process/tool combination? How fragile is that to change? To parent companies pivoting? 3/8
Sep 7
Using X/Y Plot in a local #stablediffusion WebUI to create contact sheets exploring the latent space for my previous post. As a photo/video person I'm trying to bring the #aiartprocess closer to the workflows I use with clients. 1/7 #aiphotography #aifashion #aiart #aiartcommunity
I imagine if you're coming to AI art from a different set of creative fields and client expectations, you'd approach the latent space from a different collection of frameworks and processes. Very curious how different a UI/UX catering to painters could look. 2/7 #aiartprocess
After an exhaustive process of prompting, I use different combos of X/Y Plot to make variations. The image I posted looked at sampler method+step count. Still trying to figure out what flow makes sense for me. I'm sure I'll throw it out as soon as new stuff gets released lol 3/7
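The contact-sheet idea itself is easy to reproduce outside the WebUI: paste same-size renders into a grid with Pillow. A small sketch, with colored tiles standing in for real renders:

```python
from PIL import Image

def contact_sheet(images, cols):
    """Paste equally-sized images into a grid, like an X/Y Plot output
    (e.g. sampler on one axis, step count on the other)."""
    w, h = images[0].size
    rows = -(-len(images) // cols)  # ceiling division
    sheet = Image.new("RGB", (cols * w, rows * h), "white")
    for i, im in enumerate(images):
        sheet.paste(im, ((i % cols) * w, (i // cols) * h))
    return sheet

# e.g. six 64px renders in a 3x2 grid
tiles = [Image.new("RGB", (64, 64), c) for c in ["red", "green", "blue"] * 2]
sheet = contact_sheet(tiles, cols=3)
```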