I started with this animation I rendered in #UE5@UnrealEngine using a @daz3d model with #realtime clothing using the @triMirror plugin. Walk was from the #daz store. Hair was from the Epic Marketplace. Went for a floaty, underwater vibe with the hair settings. 2/8 #aiartprocess
I felt the #deforum animation lost some of the body and fluidity in the hair, and the hand artifacts were too much, so I decided to layer and blend the Deforum video on top of the original #ue5 render in Premiere Pro. 4/8 #aiartprocess
I also added a blurred layer for a slight glowing effect on the skin.
I liked the mix of fluidity and flickering in the aesthetic. I also decided not to fix the artifacts at the top of the head. Something about the imperfection was interesting to me. 5/8 #aiartprocess
I then upscaled the video to 4K with @topazlabs Video Enhance AI. With that I could animate camera moves within a 1080p frame in Premiere Pro. 6/8 #aiartprocess
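The reason the 4K upscale enables camera moves is simple arithmetic: an oversized clip in a smaller sequence leaves pixels to pan and zoom across without softening. A tiny sketch of that headroom calculation (the function name and numbers are illustrative, not anything from the thread):

```python
def pan_headroom(src_w, src_h, frame_w, frame_h):
    """Pixels of pan range left over when an oversized clip
    sits in a smaller sequence frame (no upscaling needed)."""
    return src_w - frame_w, src_h - frame_h

# 4K UHD source inside a 1080p Premiere sequence
print(pan_headroom(3840, 2160, 1920, 1080))  # (1920, 1080)
```

So a UHD clip in a 1080p timeline can pan a full frame-width horizontally, or zoom to 200% before losing sharpness.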
I generated 3 img2img backgrounds in #stablediffusion. I took CC0 images that I found online, edited them in Photoshop to get roughly matching horizon levels, and used them as init images. Then upscaled to 4K with @topazlabs. 7/8 #aiartprocess
I animated and layered the background plates together in Premiere. Keyed out the blue background in the walk video and laid it on top. Then made slight adjustments in color/contrast as she walked through the different environments. 8/8 #aiartprocess
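The keying step was done with Premiere's built-in keyer, but the core idea is easy to show in code: a pixel is treated as "blue screen" when its blue channel dominates red and green, and gets zero alpha. A minimal NumPy sketch (the function name and threshold are hypothetical, not the thread's actual settings):

```python
import numpy as np

def chroma_key_blue(frame, threshold=1.3):
    """Alpha mask for an RGB uint8 frame: 0 where a pixel reads
    as blue-screen blue, 255 where it is kept as foreground."""
    f = frame.astype(np.float32) + 1.0  # +1 avoids divide-by-zero logic
    r, g, b = f[..., 0], f[..., 1], f[..., 2]
    # keyed out when blue clearly dominates both other channels
    is_blue = (b > r * threshold) & (b > g * threshold)
    return np.where(is_blue, 0, 255).astype(np.uint8)

# Tiny demo: one blue-screen pixel, one skin-tone pixel
demo = np.array([[[10, 20, 200], [200, 150, 120]]], dtype=np.uint8)
print(chroma_key_blue(demo))  # [[  0 255]]
```

Real keyers add edge softening and spill suppression on top of this hard threshold, which is why the composite in Premiere looks cleaner than a binary mask would.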
One thing I learned from working in fashion—what’s considered “good” changes from region to region, country to country. From working in video and photo, “good” changed depending on if the person worked mostly with photographers or filmmakers. 1/10
Then there are differences in genre, branding strategy, or individual people. There's overlap of course, what we might consider the transcendent or timeless qualities of certain aesthetics, but the differences in taste always had a relevant impact on budgets, rates, and networking. 2/10
So when I think about being multi-platform with social media, I'm actually thinking about mental health. If you believe there's only one market, one audience, one unified concept of taste, it's easy to catastrophize when posts don't do well. 3/10
Then I took a single still frame from that video and ran it through #img2img in a local instance of #stablediffusion WebUI. After generating a few options I ended up with this image. 3/9 #aiartprocess
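For anyone unfamiliar with the img2img knob being turned here: the denoising strength setting controls how far the init image is pushed into noise before being denoised back, which in practice means how many of the sampler's steps actually run. A minimal sketch of that scheduling logic, mirroring how pipelines like diffusers compute it (the function and example values are illustrative, not the settings used in the thread):

```python
def img2img_steps(num_inference_steps: int, strength: float) -> int:
    """Denoising steps actually run in img2img.

    strength near 0 returns the init image almost untouched;
    strength of 1.0 is effectively text2img from pure noise.
    """
    if not 0.0 <= strength <= 1.0:
        raise ValueError("strength must be in [0, 1]")
    return min(int(num_inference_steps * strength), num_inference_steps)

print(img2img_steps(50, 0.6))  # 30
```

This is why low-strength img2img keeps the pose and composition of a video frame while still restyling the surface detail.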
If you have a style or consistent aesthetic you’re going for with AI art, it might be a good idea to figure out how to get to that look with different tools, starting points, and process pathways. 1/8
We already see how fast these tools are changing. And as we iterate, as the tool space evolves, we can’t know how backwards compatible changes will be. Maybe we have more control with local SD, but maybe 2/8
parts of your future process will take on powerful pieces that are app-based or otherwise a black box. You might ask yourself: how much of my style is locked into a specific process/tool combination? How fragile is that to change? To parent companies pivoting? 3/8
I imagine if you're coming to AI art from a different set of creative fields and client expectations, you'd approach the latent space from a different collection of frameworks and processes. Very curious how different a UI/UX catering to painters could look. 2/7 #aiartprocess
After an exhaustive process of prompting, I use different combos of the X/Y Plot script to make variations. The image I posted compared sampler method and step count. Still trying to figure out what flow makes sense for me. I'm sure I'll throw it out as soon as new stuff gets released lol 3/7
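Under the hood, the X/Y Plot script is just a Cartesian product: one parameter varies along each axis, and every cell of the grid is a full generation. A quick sketch of building that sweep (the specific sampler names and step counts are examples, not the ones from the posted grid):

```python
from itertools import product

samplers = ["Euler a", "DPM++ 2M", "DDIM"]  # X axis
step_counts = [20, 30, 50]                  # Y axis

# one settings dict per grid cell, row by row
grid = [
    {"sampler": s, "steps": n}
    for n, s in product(step_counts, samplers)
]
print(len(grid))  # 9 cells for a 3x3 comparison grid
```

Grids grow multiplicatively, which is why picking two axes at a time (sampler vs. steps here, maybe CFG vs. seed next) beats trying to sweep everything at once.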