I could have created a similar scene in just Unreal Engine and Quixel but I wanted to see what I could do with this landscape image I generated in #midjourney 2/8 #aiartprocess
I'm also trying to do more collaborations with other AI Artists so I used this as an excuse to research depth maps further and see how I could push them. I generated this LeReS depth map using "Boosting Monocular Depth Estimation to High Resolution" on GitHub. 3/8 #aiartprocess
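For anyone who wants to reproduce this step: the repo is driven by a single script. A minimal sketch of the invocation, with the flags as I recall them from the repo's README (verify against the current docs); depthNet 2 selects LeReS as the base network:

```python
# Rough sketch of generating a boosted LeReS depth map with the
# compphoto/BoostingMonocularDepth repo. Flag names are from the
# repo's README as I remember them, so verify before running.
import subprocess

subprocess.run(
    [
        "python", "run.py",
        "--Final",                  # run the full boosting pipeline
        "--data_dir", "inputs",     # folder holding the source image
        "--output_dir", "outputs",  # boosted depth maps land here
        "--depthNet", "2",          # 2 = LeReS as the base estimator
    ],
    cwd="BoostingMonocularDepth",   # cloned repo directory (assumed path)
    check=True,
)
```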
I applied the depth map as a displacement map on a high-poly plane in @sidefx #houdini. I like Houdini because of the way it helps me deconstruct my process with nodes and code, and I can experiment with workflows easily. Rendered this natively in Mantra. 4/8 #aiartprocess
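A rough hou-Python sketch of that kind of displacement setup: a dense grid pushed up by the depth image in a wrangle. The file path, grid size, and amplitude here are placeholder assumptions, not the exact scene values:

```python
# Minimal Houdini Python sketch: displace a dense grid by a depth map.
# Path and numbers are placeholders, not the actual scene settings.
import hou

geo = hou.node("/obj").createNode("geo", "depth_displace")
grid = geo.createNode("grid")
grid.parmTuple("size").set((10, 10))
grid.parm("rows").set(1024)   # high-poly so the displacement reads well
grid.parm("cols").set(1024)

wrangle = geo.createNode("attribwrangle", "displace_by_depth")
wrangle.setInput(0, grid)
# Remap point position into 0-1 UV space, sample the depth image,
# then push each point up by the depth value. You may need to flip
# v depending on the image orientation.
wrangle.parm("snippet").set("""
float u = fit(@P.x, -5.0, 5.0, 0.0, 1.0);
float v = fit(@P.z, -5.0, 5.0, 0.0, 1.0);
vector d = texture("/path/to/depth_leres.png", u, v);
@P.y += d.x * 2.0;  // displacement amplitude (tweak to taste)
""")
wrangle.setDisplayFlag(True)
wrangle.setRenderFlag(True)
```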
I then took the scene mesh and imported it as a .obj into #UE5. I wanted to apply grass with the foliage tool directly onto the mesh, but that didn't work. So I sculpted a plane in the shape of the foreground terrain using Unreal's native modelling tools. 5/8 #aiartprocess
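If you'd rather script the import than drag and drop, UE5's Python API can handle it. A minimal sketch with placeholder paths, assuming the Python Editor Script Plugin is enabled:

```python
# Rough UE5 Python sketch: import the Houdini .obj as an asset.
# Paths are placeholders; run from the UE Python console.
import unreal

task = unreal.AssetImportTask()
task.filename = "C:/work/depth_scene.obj"    # exported mesh (placeholder)
task.destination_path = "/Game/DepthScene"   # content folder (placeholder)
task.automated = True                        # skip the import dialog
task.save = True

unreal.AssetToolsHelpers.get_asset_tools().import_asset_tasks([task])
```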
I then used the foliage brush to add grass to the plane. There are lots of YouTube tutorials like this one that explain how. 6/8 #aiartprocess
I used the Third-Person game template in #UnrealEngine5 and did a screen recording of a character I controlled running around the scene in real time. I also animated a camera and rendered a cinematic with Movie Render Queue. (1st post.) 7/8 #aiartprocess
There are clearly some rough edges to this project, but it's basically an exploration of different ways a 2D image might be converted into 3D with depth maps. I have some other ideas I'm going to try soon. If you have a cool process involving that, please share! 8/8 #aiartprocess
I started with this animation I rendered in #UE5 @UnrealEngine using a @daz3d model with #realtime clothing via the @triMirror plugin. Walk was from the #daz store. Hair was from the Epic Marketplace. Went for a floaty, underwater vibe with the hair settings. 2/8 #aiartprocess
One thing I learned from working in fashion: what's considered "good" changes from region to region, country to country. From working in video and photo, "good" changed depending on whether the person worked mostly with photographers or filmmakers. 1/10
Then there are differences in genre, branding strategy, or individual ppl. There's overlap of course, what we might consider the transcendent or timeless qualities of certain aesthetics, but the differences in taste always had a relevant impact on budgets, rates, and networking. 2/10
So when I think about being multi-platform with social media, I'm actually thinking about mental health. When you think there's only one market, one audience, one unified concept of taste, it's easy to catastrophize when posts don't do well. 3/10
Then I took a single still frame from that video and ran it through #img2img in a local instance of #stablediffusion WebUI. After generating a few options I ended up with this image. 3/9 #aiartprocess
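The WebUI also exposes an HTTP API if you launch it with the --api flag, so this step can be scripted. A sketch of that img2img call against a local instance; the prompt and settings here are placeholders:

```python
# Sketch of the img2img step against a local AUTOMATIC1111 WebUI
# started with --api. Prompt and settings are placeholders.
import base64
import requests

with open("still_frame.png", "rb") as f:
    init_image = base64.b64encode(f.read()).decode("utf-8")

payload = {
    "init_images": [init_image],
    "prompt": "painterly landscape, soft light",  # placeholder prompt
    "denoising_strength": 0.55,  # how far to drift from the source frame
    "steps": 30,
}

resp = requests.post("http://127.0.0.1:7860/sdapi/v1/img2img", json=payload)
resp.raise_for_status()

# The API returns base64-encoded images; save the first option.
with open("img2img_result.png", "wb") as f:
    f.write(base64.b64decode(resp.json()["images"][0]))
```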
If you have a style or consistent aesthetic you’re going for with AI art, it might be a good idea to figure out how to get to that look with different tools, starting points, and process pathways. 1/8
We already see how fast these tools are changing. And as we iterate, as the tool space evolves, we can't know how backwards-compatible changes will be. Maybe we have more control with local SD, but maybe 2/8
parts of your future process will come to depend on powerful pieces that are app-based or otherwise a black box. You might ask yourself: how much of my style is locked into a specific process/tool combination? How fragile is that to change? To parent companies pivoting? 3/8
I imagine if you're coming to AI art from a different set of creative fields and client expectations, you'd approach the latent space from a different collection of frameworks and processes. Very curious how different a UI/UX catering to painters could look. 2/7 #aiartprocess
After an exhaustive process of prompting, I used different combos of X/Y Plot to make variations. The image I posted compared sampler method + step count. Still trying to figure out what flow makes sense for me. I'm sure I'll throw it out as soon as new stuff gets released lol 3/7
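The X/Y Plot script lives inside the WebUI, but the same sampler-vs-step-count sweep is easy to sketch with diffusers if you want it outside the UI. This is a stand-in for the WebUI script, not the script itself; the model id, prompt, and grid values are placeholders:

```python
# Same idea as the WebUI X/Y Plot (sampler method vs. step count),
# sketched with diffusers. Model id, prompt, and values are placeholders.
import torch
from diffusers import (
    StableDiffusionPipeline,
    EulerDiscreteScheduler,
    DPMSolverMultistepScheduler,
)

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

samplers = {
    "euler": EulerDiscreteScheduler,
    "dpmpp": DPMSolverMultistepScheduler,
}
step_counts = [20, 30, 50]
prompt = "moody landscape, volumetric fog"  # placeholder prompt
seed = 1234  # fixed seed so only the X/Y axes vary

for name, sched in samplers.items():
    # Swap the scheduler (sampler) while reusing its config.
    pipe.scheduler = sched.from_config(pipe.scheduler.config)
    for steps in step_counts:
        gen = torch.Generator("cuda").manual_seed(seed)
        image = pipe(prompt, num_inference_steps=steps, generator=gen).images[0]
        image.save(f"grid_{name}_{steps}.png")
```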