I started by having #ChatGPT write a few rough drafts of a scene involving a panicked character calling her friend for help from a spaceship. I was going for something that would involve heightened emotions but not be too serious. 2/8
Then I wrote a short script using some of those ideas plus my own and put the whole thing into @elevenlabsio. I generated a few takes using low Stability (1-2%) and high Clarity (90-99%). Each take usually had parts I liked, or at least gave me ideas for direction. 3/8
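(For anyone who'd rather script this step: ElevenLabs exposes the same controls through its text-to-speech API, where the UI's Clarity slider corresponds to similarity_boost. Here's a minimal sketch of batching takes at those settings; the API key, voice ID, and line of dialogue are placeholders, not from the original project.)

```python
import requests

API_KEY = "YOUR_XI_API_KEY"    # placeholder
VOICE_ID = "YOUR_VOICE_ID"     # placeholder: the one voice used throughout

def generate_take(text: str, take: int) -> None:
    """Render one take with low Stability and high Clarity (similarity_boost)."""
    resp = requests.post(
        f"https://api.elevenlabs.io/v1/text-to-speech/{VOICE_ID}",
        headers={"xi-api-key": API_KEY, "Accept": "audio/mpeg"},
        json={
            "text": text,
            "voice_settings": {
                "stability": 0.02,         # ~2%, as in the thread
                "similarity_boost": 0.95,  # ~95% Clarity
            },
        },
    )
    resp.raise_for_status()
    with open(f"take_{take:02d}.mp3", "wb") as f:
        f.write(resp.content)

for i in range(4):  # a few takes of the same script
    generate_take("PLACEHOLDER LINE OF DIALOGUE", i)
```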
I stuck to one voice I liked for simplicity. Changing voices can sometimes dramatically alter the sound, to the point where it almost feels like different mics were used. I decided I'd just change the pitch of the voices in post to differentiate them more. 4/8
After doing a few takes of the whole script, I generated individual lines. There I'd experiment with the "prompt" to see if I could direct the acting more by adding ellipses, different punctuation, line breaks, and misspellings. Here's a sample of my history. 5/8
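(The history screenshot isn't reproduced here; the variants below are my own illustration of the technique described, not the original prompts.)

```python
# Illustrative only: four renderings of one hypothetical line,
# using punctuation, line breaks, and misspellings to steer the read.
variants = [
    "I don't know what happened. It just stopped.",
    "I don't know what happened... it just... stopped.",
    "I don't KNOW what happened!\nIt just stopped!",
    "I dont know what happend, it just, stopped,",  # looser, more breathless
]
```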
Then I laid everything out in #premierepro. I cut up the audio into sections with different takes and methodically edited down to my favorites, trying to choose parts that blended well together. 6/8
When parts wouldn't blend well together, I'd just rewrite the lines and generate a few more takes in @elevenlabsio. It's almost like instantaneous ADR. Then I used #adobeaudition to shift the pitch of the voices and add reverb. 7/8
The last step was using the script as rolling credits and putting it over an image I made in #midjourney. I added the audio waveform in After Effects. 8/8
I wonder what the future of UX design (and maybe apps in general) might look like if AI really allows us to customize our experience. Not to mention blending programs together through a third-party/custom UI, if an AI can understand what an app's GUI is displaying onscreen. 1/5
Combined with the no-code platforms of the future and advanced templates, you could probably do weird stuff like Premiere Pro x Unreal Engine x a fighting-game template x an anime you like, and custom-gamify your interface. 2/5
Or maybe you could just ask a chat AI to combine several apps/aesthetics together and have it propose different connections and gamification strategies based on its knowledge of UI/UX design. 3/5
Lately I've been thinking about how much of "reality" is a negotiation between useful illusions and the material world. I think it's safe to say that a portion of how we view things is through shortcuts and narratives. 1/11
To what degree we engage in fictions probably differs from person to person. Some believe the entire thing is a fiction handed to us by evolution to help us navigate the food chain. Others think they engage with reality concretely the whole time. 2/11
Personally I think it's interesting how much of the world we can't “see” except through technology or layers of reasoning—radio waves, germs, the financial system, justice. These aren’t simple things we just look at and easily have collective intuition about. 3/11
Reused an animation I made for a post a few months ago. Rendered in #UE5 @UnrealEngine using a @daz3d model with #realtime clothing via the @triMirror plugin. The walk was from the #daz store; the hair was from the Epic Marketplace. 2/6 #aiartprocess
Used SD2’s #depth2img model running locally in Automatic1111. Thanks to @TomLikesRobots for the help getting it working, and for showing how the model retains more consistency than normal img2img. I basically ran an img2img batch process on the image sequence. 3/6 #aiartprocess
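(The thread doesn't say whether this was the webui's built-in Batch tab or its API; here's a minimal sketch of the API route, assuming the webui was launched with --api and the SD2 depth model is already loaded in the UI. Folder names, prompt, and parameter values are assumptions.)

```python
import base64
from pathlib import Path
import requests

URL = "http://127.0.0.1:7860/sdapi/v1/img2img"    # Automatic1111 with --api
SRC, DST = Path("frames_in"), Path("frames_out")  # hypothetical folders
DST.mkdir(exist_ok=True)

for frame in sorted(SRC.glob("*.png")):
    payload = {
        "init_images": [base64.b64encode(frame.read_bytes()).decode()],
        "prompt": "PLACEHOLDER STYLE PROMPT",
        "denoising_strength": 0.4,  # assumed; tune for frame-to-frame consistency
        "steps": 20,
    }
    r = requests.post(URL, json=payload)
    r.raise_for_status()
    out_b64 = r.json()["images"][0]
    (DST / frame.name).write_bytes(base64.b64decode(out_b64))
```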
A thought on resistance to change. I recently had a convo with a friend of mine who went through a serious breakup that left her rattled. She talked about how hard it was to let go of the future she had envisioned for herself, a future she felt so sure was going to come. 1/7
I feel like part of the resistance to change isn’t just rooted in the past and present, but also in your perception of how you thought the world was going to look and your place in it. Expectations are set and not met. 2/7
It’s like trying to turn a race car: the more momentum, the more energy it takes to change course. That’s not true in all cases, but when it is, it can be an incredible struggle. The weight of disappointment can be a terrible burden. 3/7
I’m so fascinated by how much of understanding a concept can come down to a language issue. Being able to ask #ChatGPT to summarize, expand, rephrase, and format explanations in different ways is so refreshing. 1/7
Like here’s #ChatGPT explaining how to cook a steak in pseudocode format. 2/7
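(The original tweet showed a screenshot; as a stand-in, the output looks something like this Python-flavored pseudocode. This is my reconstruction of the format, not the actual reply.)

```python
# Pseudocode: the helper functions are illustrative, not a real library.
def cook_steak(steak, doneness="medium_rare"):
    season(steak, ["salt", "pepper"])
    pan = preheat("cast_iron_skillet", heat="high")
    for side in ("first", "second"):
        sear(steak, pan, minutes=2)
    while internal_temp(steak) < TARGET_TEMP[doneness]:
        baste(steak, with_="butter")
    rest(steak, minutes=5)
    return steak
```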
Here I asked #ChatGPT to explain how version control works in @github but in the context of an anime scene from My Hero Academia. 3/7
I could have created a similar scene in just Unreal Engine and Quixel, but I wanted to see what I could do with this landscape image I generated in #midjourney. 2/8 #aiartprocess
I'm also trying to do more collaborations with other AI artists, so I used this as an excuse to research depth maps further and see how far I could push them. I generated this LeReS depth map using "Boosting Monocular Depth Estimation to High Resolution" on GitHub. 3/8 #aiartprocess
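(From that repo's README as I recall it, the pipeline is driven by run.py; the flag names and the LeReS selector below are assumptions and should be checked against the repo itself.)

```python
import subprocess

# Assumed CLI of BoostingMonocularDepth's run.py:
# --depthNet selects the base estimator (2 = LeReS in the README's table).
subprocess.run(
    [
        "python", "run.py",
        "--Final",                      # full boosted, high-resolution merge
        "--data_dir", "inputs/",        # source images
        "--output_dir", "depth_maps/",  # boosted depth maps land here
        "--depthNet", "2",
    ],
    check=True,
)
```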