CoffeeVectors · Feb 1, 2023 · 8 tweets
Really impressed by the acting you can generate with @elevenlabsio #texttospeech! These are #AIvoices generated from text—dozens of "takes" stitched together. Breakdown thread: 1/8 #syntheticvoices #ainarration #autonarrated #aicinema #aiartcommunity #aiia #ai #MachineLearning
I started by having #chatGPT write a few rough drafts of a scene involving a panicked character calling her friend for help from a spaceship. I was going for something that would involve heightened emotions but not be too serious. 2/8
Then I wrote a short script using some of those ideas plus my own and put the whole thing into @elevenlabsio. I generated a few takes using low Stability (1-2%) and high Clarity (90-99%). Each take usually had parts I liked, or at least gave me ideas for direction. 3/8
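The thread works in the ElevenLabs web UI, but the same knobs exist on the public text-to-speech API: stability and similarity_boost are 0–1 floats, and the UI's Stability/Clarity percentages map onto them. A minimal sketch of batch-generating takes this way (the API key, voice ID, and line of dialogue are placeholders):

```python
import requests

API_KEY = "YOUR_XI_API_KEY"   # placeholder
VOICE_ID = "YOUR_VOICE_ID"    # placeholder: one voice, kept constant across takes

def generate_take(text: str, out_path: str) -> None:
    """Generate one TTS 'take' with low stability / high clarity."""
    resp = requests.post(
        f"https://api.elevenlabs.io/v1/text-to-speech/{VOICE_ID}",
        headers={"xi-api-key": API_KEY},
        json={
            "text": text,
            # Low stability -> more expressive, varied deliveries per take.
            # similarity_boost is the API-side knob behind the UI's Clarity slider.
            "voice_settings": {"stability": 0.02, "similarity_boost": 0.95},
        },
    )
    resp.raise_for_status()
    with open(out_path, "wb") as f:
        f.write(resp.content)  # MP3 bytes

# A few takes of the same line; the very low stability makes each one read differently.
for i in range(4):
    generate_take("Mayday! Can you hear me? Something's wrong with the ship.", f"take_{i}.mp3")
```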
I stuck to one voice I liked for simplicity. Changing voices can sometimes dramatically alter the sound, to the point where it almost feels like different mics were used. I decided I'd just change the pitch of the voices in post to differentiate them more. 4/8
After doing a few takes of the whole script, I generated individual lines. There I'd experiment with the "prompt" to see if I could direct the acting more by adding ellipses, different punctuation, line breaks, and misspellings. Here's a sample of my history. 5/8
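Those prompt tweaks are easy to batch once a helper like generate_take (above) exists. A sketch with an invented line of dialogue; the variants differ only in punctuation, line breaks, and deliberate misspellings:

```python
# Hypothetical line from the script, rewritten with punctuation/spelling tweaks
# to steer the delivery; each variant becomes its own generated take.
line_variants = [
    "I can't reach the airlock. It's jammed.",
    "I can't... I can't reach the airlock... it's jammed!",
    "I can't reach the airlock.\nIt's JAMMED!",
    "I cain't reach the airlock, it's jaaammed!",  # misspellings nudge the read
]

for i, text in enumerate(line_variants):
    generate_take(text, f"airlock_variant_{i}.mp3")
```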
Then I laid everything out in #premierepro. I cut up the audio into sections with different takes and methodically edited down to my favorites, trying to choose parts that blended well together. 6/8
When parts wouldn't blend well together, I'd just rewrite the lines and generate a few more takes in @elevenlabsio. It's almost like instantaneous ADR (automated dialogue replacement). Then I used #adobeaudition to shift the pitch of the voices and add reverb. 7/8
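The pitch shift and reverb were done in Audition, a GUI; as a rough stand-in for those two moves, here is the same processing sketched with librosa and scipy (the filename, the semitone amount, and the synthetic impulse response are all assumptions):

```python
import numpy as np
import librosa
import soundfile as sf
from scipy.signal import fftconvolve

# Load one edited voice track (hypothetical filename).
y, sr = librosa.load("character_b.wav", sr=None)

# Pitch-shift by a couple of semitones so the second character
# reads as a distinct voice (the thread did this step in Audition).
shifted = librosa.effects.pitch_shift(y, sr=sr, n_steps=2.0)

# Cheap synthetic reverb: convolve with exponentially decaying noise
# standing in for a real impulse response.
rng = np.random.default_rng(0)
ir_len = int(0.4 * sr)
ir = rng.standard_normal(ir_len) * np.exp(-6.0 * np.linspace(0, 1, ir_len))
wet = fftconvolve(shifted, ir)[: len(shifted)]
wet /= np.max(np.abs(wet)) + 1e-9

out = 0.8 * shifted + 0.2 * wet  # mostly dry, a touch of room
sf.write("character_b_processed.wav", out, sr)
```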
The last step was using the script as rolling credits, placed over an image I made in #midjourney. I added the audio waveform in After Effects. 8/8
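The waveform overlay itself was made in After Effects; a static approximation can also be rendered with matplotlib, assuming a final mix file, ready to composite over the Midjourney image:

```python
import numpy as np
import matplotlib.pyplot as plt
import soundfile as sf

audio, sr = sf.read("final_mix.wav")  # hypothetical final mix
if audio.ndim > 1:
    audio = audio.mean(axis=1)  # fold to mono for display

fig, ax = plt.subplots(figsize=(12.8, 2.0), dpi=100)
t = np.arange(len(audio)) / sr
ax.plot(t, audio, linewidth=0.3, color="white")
ax.set_axis_off()
# Transparent PNG, ready to layer over the background image in an editor.
fig.savefig("waveform_overlay.png", transparent=True, bbox_inches="tight")
```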


More from @CoffeeVectors

Oct 12
Made this video with iPhone photos I took of my friend Stephanie that I used as keyframes in @LumaLabsAI! With the camera controls I can generate transitions between shots. I also built a custom web app in Next.js to help me speedramp and edit all the clips! Breakdown 🧵(1/18)
Basically, if you have some photos, throw in a start and end frame, begin your prompt with the camera move, and then add phrases like “smooth camera, steadicam”. I find minimal prompts work best. And don’t enhance the prompt (that tends to add handheld shake). (2/18)
Sometimes I’ll add ‘motion blur’, ‘drone racing’, or ‘music video’ to see how it changes the results. “Perfect face” can help reduce crossed eyes, etc. You’ll still need to experiment, but long prompts or prompts describing the scene don’t usually help with this effect. (3/18)
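The custom speedramp app from (1/18) is a Next.js tool that isn't shown in the thread; the core of any speedramp, though, is just remapping output time to source frames through an easing curve. A minimal sketch of that idea (the frame counts and the power-curve easing are assumptions):

```python
import numpy as np

def speedramp_indices(n_src: int, n_out: int, ease: float = 3.0) -> np.ndarray:
    """Map output frames to source frames through an ease-in curve.

    ease > 1 starts slow and accelerates; ease < 1 starts fast and settles.
    """
    u = np.linspace(0.0, 1.0, n_out)   # normalized output time
    remapped = u ** ease               # easing curve (power ramp)
    return np.clip((remapped * (n_src - 1)).round().astype(int), 0, n_src - 1)

# Example: resample a 120-frame clip into 90 output frames that linger on
# the opening pose and whip through the transition.
frame_order = speedramp_indices(n_src=120, n_out=90, ease=3.0)
# Feed frame_order into a frame writer: [src_frames[i] for i in frame_order]
```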
Jul 20
Testing how LivePortrait handles lip-syncing 24fps lyrics on top of slow-motion footage. Was curious to see if it might help with music videos. Quick explanation below! 🧵
Started with a clip from an Eminem song and passed it through Adobe Podcast to get the a cappella (isolated vocal). Passed that through @hedra_labs with a Midjourney portrait for the face animation. Used that as input into LivePortrait in ComfyUI, along with a slow-motion clip from Die Hard.
I find it helps for the Live Portrait input to have a plain background. Otherwise you might get extra warping in the background behind the head.
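The thread doesn't say how it gets that plain background; one programmatic way, assuming the rembg matting library and a hypothetical frame filename, is to cut the subject out and composite it onto a flat color before the ComfyUI pass:

```python
from PIL import Image
from rembg import remove  # pip install rembg

# Hypothetical input frame; LivePortrait isn't invoked here, this only
# preps a plain backdrop before the ComfyUI pass.
frame = Image.open("input_frame.png").convert("RGBA")
cutout = remove(frame)  # subject with transparent background

flat = Image.new("RGBA", cutout.size, (128, 128, 128, 255))  # plain gray backdrop
flat.alpha_composite(cutout)
flat.convert("RGB").save("input_frame_plain_bg.png")
```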
Dec 24, 2023
Made this video (🎶) with a Midjourney v6 image! Started by upscaling/refining with @Magnific_AI, pulled a Marigold depth map from that in ComfyUI, then used it as a displacement map in Blender, where I animated this camera pass with some relighting and a narrow depth of field. 🧵1/12
Here's the base image and the before/after in @Magnific_AI. Even though MJv6 has an upscaler, Magnific gave me better eyelid and skin details for this case. (Fun fact, this image was from a v4 prompt from summer last year, when MJ had just released a new beta upscaler.) 2/12
Next step was using the new Marigold Depth Estimation node in ComfyUI to get an extremely detailed depth map. Note that I'm saving the result as an EXR file (important for adjusting levels later), and that the remap and colorizing nodes are just for visualization. 3/12
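EXR matters here because it stores float values, so levels can be remapped later without the banding an 8-bit format would introduce. A sketch of that remap step outside ComfyUI, using OpenCV's EXR support (the filenames and the 1st/99th-percentile stretch are assumptions):

```python
import os
os.environ["OPENCV_IO_ENABLE_OPENEXR"] = "1"  # must be set before importing cv2
import cv2
import numpy as np

# Hypothetical filename for the Marigold depth pass saved from ComfyUI.
depth = cv2.imread("marigold_depth.exr", cv2.IMREAD_ANYCOLOR | cv2.IMREAD_ANYDEPTH)
if depth.ndim == 3:
    depth = depth[..., 0]  # depth is single-channel; drop duplicated channels

# "Adjusting levels": remap the float range so the displacement in Blender
# uses the full 0..1 span instead of whatever raw range the model emitted.
lo, hi = np.percentile(depth, [1, 99])
remapped = np.clip((depth - lo) / (hi - lo), 0.0, 1.0).astype(np.float32)

cv2.imwrite("marigold_depth_remapped.exr", remapped)  # still float EXR, no banding
```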
Nov 15, 2023
Testing LCM LoRAs in an AnimateDiff & multi-ControlNet workflow in ComfyUI. I was able to process this entire Black Pink music video as a single .mp4 input. The LCM lets me render at 6 steps (vs. 20+) on my 4090 and uses only 10.5 GB of VRAM. Here's a breakdown 🧵[1/11]
The entire thing took 81 minutes to render 2,467 frames, so about 2 seconds per frame. This doesn't include the time to extract the image sequence from the video and generate the ControlNet maps. Used Zoe Depth and Canny ControlNets in SD 1.5 at 910 x 512. [2/11]
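The image-sequence extraction step mentioned above can be done with ffmpeg or a few lines of OpenCV; a minimal sketch of the latter (paths are placeholders):

```python
import cv2
import os

def extract_frames(video_path: str, out_dir: str) -> int:
    """Dump every frame of an .mp4 as numbered PNGs for the ComfyUI batch."""
    os.makedirs(out_dir, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    n = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        cv2.imwrite(os.path.join(out_dir, f"frame_{n:05d}.png"), frame)
        n += 1
    cap.release()
    return n

# e.g. extract_frames("music_video.mp4", "frames/")  # ~2,467 frames for this clip
```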
Improving the output to give it a stronger style, more detail, and a less rotoscope-ish feel will require adjusting individual shots. But doing the entire video in one go lays down a rough draft for you to iterate on: build on fun surprises, troubleshoot problem areas. [3/11]
May 25, 2023
Timelapse of using #photoshop’s new generative fill feature to connect two images and build a scene around them using blank prompts. Was inspired by @MatthieuGB’s post doing something similar! Notice how I’m not adding any descriptions, but letting gen fill present options for…
Here’s the final image! 2/4
And here are the original images made in #midjourneyv51 3/4
Feb 28, 2023
Testing Multi-ControlNet on a scene with extended dialog, a range of facial expressions, and head turning. No EBSynth used. #AImusic from aiva.ai. Breakdown🧵1/15 #aicinema #controlnet #stablediffusion #aiia #aiartcommunity #aiva #machinelearning #deeplearning #ai
Overall flow: pre-process video > image sequence > play with prompts > initial ControlNet settings > ControlNet batch render > upscale > clean up in post > key out background > deflicker > video post-production 2/15
The approach I used here was figuring out an initial workflow; there’s definitely a lot to play with and improve on. The original video is low-res and a little blurry, so I started with a pre-processing pass, bringing out the edges/brightness and upscaling. 3/15
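The thread doesn't name the tools used for that pre-process pass; as a stand-in, an unsharp mask plus a brightness lift and a simple Lanczos upscale in OpenCV covers the same ground (the parameters are guesses to taste):

```python
import cv2

def preprocess(frame_path: str, out_path: str, scale: float = 2.0) -> None:
    img = cv2.imread(frame_path)

    # Bring out edges with an unsharp mask: subtract a blurred copy.
    blur = cv2.GaussianBlur(img, (0, 0), sigmaX=3)
    sharp = cv2.addWeighted(img, 1.5, blur, -0.5, 0)

    # Lift brightness/contrast slightly so the ControlNet maps pick up more detail.
    bright = cv2.convertScaleAbs(sharp, alpha=1.1, beta=10)

    # Simple Lanczos upscale as a stand-in for a dedicated upscaler.
    h, w = bright.shape[:2]
    up = cv2.resize(bright, (int(w * scale), int(h * scale)),
                    interpolation=cv2.INTER_LANCZOS4)
    cv2.imwrite(out_path, up)
```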
