Pushed #gen2 again & made a movie trailer. #aicinema is finally here!
Every shot made from text prompts, except one iconic shot you all know, done with #gen1
Made possible by @runwayml. @bazluhrmann, your movies have been a great inspiration! 😍🎞️🙏
Voices: @elevenlabsio #aianimation
Some stats about used AI Tools:
☑️500+ shots generated with #gen2 beta to get 65 shots that made it into the movie
☑️5000 credits used to generate 3 custom voices with @elevenlabsio that fit my taste in timbre and likeness
☑️Initial idea by me, script co-created with #ChatGPT
Some stats about the film edit process:
☑️Music: the most important part for me! Some shots inspired me to pick 2 tracks; I put them together with the voices first, imagining the film only in my head
☑️Pace, timing, narration - all of that was done with the soundtrack first.
👇
☑️Film editing: putting the 250-300 shots I already had together like a puzzle over several hours
☑️Fine-tuning prompts for shots that needed improvement, re-generating, and generating new ones to tell the story better
☑️Sound FX to enhance atmosphere & story
👇
☑️Final refinements and minor tweaks to the film
☑️Designing titles & Thumbnail with #midjourneyv5 and #photoshop
☑️Share with the world
For those who believe that AI will do everything for you: No!
It can... and will soon.
I'll always prefer to put my own heart & soul in. 🙏
For those who wanna support my new AI Artist account, please give @visiblemaker a follow!
.
Also find me on YouTube for better viewing quality.
Check my linktree for links to IRL filmmaking works.
.
Happy to answer questions in the comments.
Thanks! 🙏
• • •
Pushed #gen2 to its limits the first day & made a short.
AI animation will never be the same! As a filmmaker, I am deeply impressed by what is possible now and by where we already are in terms of quality.
It will only get better from here on. 😍🎞️🙏 Thanks @runwayml 🥳 #aianimation
"Taste of Duality" was made within the first day of my beta access to gen-2.
I started with image + text inputs, but soon discovered that text-only prompting gives more repeatable and refined outputs. Using --seed & slight text changes felt almost like directing! 🎦 #runwayml
With text-only prompting, it felt like not being too specific about what you want gives better results. So instead of defining every aspect of your vision, try describing it in a more general, open way, so the AI can step in with its own creative realisation.