Justin Alvey · Feb 16 · 6 tweets · 3 min read
I wanted to imagine how we’d better use #stablediffusion for video content / AR.

A major obstacle, and the reason most generated videos are so flickery, is the lack of temporal and viewing-angle consistency, so I experimented with an approach to fix this

See 🧵 for process & examples
Ideally you want to learn a single representation of an object across time (or across viewing directions) and perform a *single* #img2img generation on it.

For this I used layered-neural-atlases.github.io (2021)
This learns an "atlas" to represent an object and its background across the video.

Regularization losses during training help preserve the original shape, with a result that resembles a usable, slightly "unwrapped" version of the object.
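
To make the idea concrete, here is a minimal sketch of how such an atlas can be parameterized, assuming a PyTorch setup. This is my simplification: the actual method also learns separate foreground/background layers, opacities, and positional encodings.

```python
import torch
import torch.nn as nn

def mlp(in_dim, out_dim, hidden=256, depth=6):
    """Plain fully-connected network, used for both mappings below."""
    layers = [nn.Linear(in_dim, hidden), nn.ReLU()]
    for _ in range(depth - 2):
        layers += [nn.Linear(hidden, hidden), nn.ReLU()]
    layers.append(nn.Linear(hidden, out_dim))
    return nn.Sequential(*layers)

mapping = mlp(3, 2)  # video coordinate (x, y, t) -> atlas coordinate (u, v)
atlas   = mlp(2, 3)  # atlas coordinate (u, v)    -> color (r, g, b)

def reconstruction_loss(xyt, rgb_target):
    """Every frame pixel must find its color at its mapped atlas location.

    Training this jointly (plus the paper's rigidity/regularization terms,
    omitted here) forces one shared atlas image to explain all frames.
    """
    uv = torch.tanh(mapping(xyt))        # keep atlas coordinates in [-1, 1]
    rgb_pred = torch.sigmoid(atlas(uv))  # colors in [0, 1]
    return ((rgb_pred - rgb_target) ** 2).mean()
```

Because every pixel of every frame routes through the same (u, v) lookup, an edit applied once to the atlas is automatically consistent across time.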
The authors of the paper recommend using Mask R-CNN for creating a segmentation mask before training, but for this I found it easier (and cleaner) to just create a mask with the Rotobrush in After Effects
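
For reference, the Mask R-CNN route is only a few lines with torchvision. This is a sketch: the frame filename is hypothetical, and the pretrained-weights argument varies across torchvision versions.

```python
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

# Off-the-shelf Mask R-CNN pretrained on COCO.
model = torchvision.models.detection.maskrcnn_resnet50_fpn(pretrained=True).eval()

frame = to_tensor(Image.open("frame_0001.png").convert("RGB"))
with torch.no_grad():
    pred = model([frame])[0]

# Keep the highest-scoring instance and threshold its soft mask.
mask = (pred["masks"][0, 0] > 0.5).to(torch.uint8) * 255
Image.fromarray(mask.numpy(), mode="L").save("mask_0001.png")
```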
Once the "atlas" was learned, I could run it through #depth2img, then use the new atlas to reproject across the video.

This last remapping part is quick, so you could imagine it being rendered live based on your viewing angle for #AR (for a pre-generated scene).
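
Roughly how the "edit once, remap everywhere" step could look with diffusers and OpenCV. The prompt, the filenames, and the per-frame UV map format (pixel coordinates into the styled atlas, exported from the atlas training) are all hypothetical:

```python
import cv2
import numpy as np
import torch
from PIL import Image
from diffusers import StableDiffusionDepth2ImgPipeline

# One depth-guided generation on the atlas instead of per-frame img2img.
pipe = StableDiffusionDepth2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-depth", torch_dtype=torch.float16
).to("cuda")
atlas = Image.open("atlas.png").convert("RGB")
styled = pipe(prompt="watercolor illustration", image=atlas, strength=0.8).images[0]
styled = cv2.cvtColor(np.array(styled), cv2.COLOR_RGB2BGR)

# Reprojection is a cheap per-frame lookup into the styled atlas,
# which is why it could plausibly run live for AR.
uv = np.load("uv_frame_0001.npy").astype(np.float32)  # (H, W, 2) pixel coords
map_x = np.ascontiguousarray(uv[..., 0])
map_y = np.ascontiguousarray(uv[..., 1])
frame = cv2.remap(styled, map_x, map_y, interpolation=cv2.INTER_LINEAR)
cv2.imwrite("styled_frame_0001.png", frame)
```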
Here are some more out-there takes, including turning my couch into a jumping castle! 🏰🎈

There are endless possibilities here for content creation. Follow for more creative AI experiments!


More from @justLV

Feb 2
We are getting closer to “Her”, where conversation is the new interface.

Siri couldn’t do it, so I built an e-mail summarizing feature using #GPT3 and life-like #AI generated voice on iOS.

(🔈 Audio on to be 🤯 with voice realism!)

How did I do this? 👇
I used the Gmail API to feed recent unread e-mails into a prompt sent to the @OpenAI #GPT3 Completion API. Prompt tweaks, such as telling it not to “just read them out”, gave good results
Here are the settings I used; you can see how #GPT3 does a great job of conversationally summarizing. (For the sake of privacy I made up the e-mails shown in the demo.)
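
A sketch of the plumbing as I understand it, using the pre-chat Completion API available at the time. The `creds` object, the Gmail query, and the prompt wording are my guesses, not the exact ones used:

```python
from googleapiclient.discovery import build
import openai  # assumes openai.api_key is already configured

# `creds` is an authorized Gmail OAuth2 credential object (setup omitted).
gmail = build("gmail", "v1", credentials=creds)
resp = gmail.users().messages().list(userId="me", q="is:unread", maxResults=5).execute()

emails = []
for ref in resp.get("messages", []):
    msg = gmail.users().messages().get(userId="me", id=ref["id"]).execute()
    headers = {h["name"]: h["value"] for h in msg["payload"]["headers"]}
    emails.append(
        f"From: {headers.get('From', '?')}\n"
        f"Subject: {headers.get('Subject', '?')}\n"
        f"{msg.get('snippet', '')}"
    )

prompt = (
    "You are a friendly assistant. Conversationally summarize these unread "
    "e-mails. Don't just read them out; group related ones and keep it brief.\n\n"
    + "\n---\n".join(emails)
    + "\n\nSummary:"
)
completion = openai.Completion.create(
    model="text-davinci-003", prompt=prompt, max_tokens=300, temperature=0.7
)
print(completion.choices[0].text.strip())
```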
Jan 3
I used AI to create a (comedic) guided meditation for the New Year!

(audio on, no meditation pose necessary!)

Used ChatGPT for an initial draft, and TorToiSe voice cloning conditioned on only 30s of Sam Harris audio

See 🧵 for implementation details
ChatGPT came up with some creative ideas, but the delivery was still fairly vanilla, so I iterated on it heavily and added a few Sam-isms from my experience with the @wakingup app (jokes aside, highly recommended).
Diffusion models & autoregressive transformers are coming for audio!

Text-To-Speech was created using github.com/neonbjb/tortoi…

I also highly enjoyed reading the author's blog nonint.com
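
A sketch of what the TorToiSe voice-cloning call looks like, based on the repo's documented API. The clip filenames and the line of text are placeholders:

```python
import torchaudio
from tortoise.api import TextToSpeech
from tortoise.utils.audio import load_audio

# A handful of short reference clips (~30s total) stand in for "training":
# TorToiSe conditions on them rather than updating any weights.
voice_samples = [load_audio(f"sam_clip_{i}.wav", 22050) for i in range(3)]

tts = TextToSpeech()
speech = tts.tts_with_preset(
    "Take a deep breath... and gently let go of last year's resolutions.",
    voice_samples=voice_samples,
    preset="fast",  # 'standard' / 'high_quality' trade speed for fidelity
)
torchaudio.save("meditation_line.wav", speech.squeeze(0).cpu(), 24000)
```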
Dec 20, 2022
I used the #StableDiffusion 2 Depth Guided model to create architecture photos from dollhouse furniture.

By using a depth-map you can create images with incredible spatial consistency without using any of the original RGB image.

See 🧵
2/ This model is unique in that it was fine-tuned from the Stable Diffusion 2 base with an extra input channel for depth.

Using MiDaS (a model that predicts depth from a single image), it can create new images whose depth maps match your "init image"
3/ I set the denoising strength to 1.0 so that none of the original RGB image was used

Even with widely different prompts it was able to generate consistent objects

Using simple, recognizable shapes such as wooden doll-house furniture worked great for this
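
In diffusers terms, the key setting is `strength=1.0`; the prompt and filenames below are illustrative, not the ones from the demo:

```python
import torch
from PIL import Image
from diffusers import StableDiffusionDepth2ImgPipeline

pipe = StableDiffusionDepth2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-depth", torch_dtype=torch.float16
).to("cuda")

init = Image.open("dollhouse_chair.jpg").convert("RGB")
# strength=1.0 fully re-noises the latents: none of the init image's RGB
# survives, and only its MiDaS-predicted depth map constrains the output.
result = pipe(
    prompt="a lounge chair in a sunlit concrete living room, architectural photo",
    image=init,
    strength=1.0,
).images[0]
result.save("architecture_shot.png")
```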
Nov 1, 2022
1/ I created this with Stable Diffusion using image inpainting and “walking through the latent space”

Without using tweening, every frame is generated from an interpolated embedding and a variable denoising strength, so keeping continuity was tricky

See 🧵for process
2/ First off, finding the right combination of prompt, seed and denoising strength for an #img2img in-painting is a roll of the dice

Luckily it is easy to script large batches to cherry-pick from
3/ The first and last pairs were just regular #img2img, ramped through denoising strengths from 0 to 0.8
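
My best reconstruction of that ramp, using plain img2img with a recent diffusers version for simplicity (the thread used inpainting); the prompts, seed, and schedule are illustrative:

```python
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

def embed(prompt):
    """CLIP text embedding for a prompt, so we can interpolate between two."""
    tokens = pipe.tokenizer(
        prompt, padding="max_length",
        max_length=pipe.tokenizer.model_max_length,
        truncation=True, return_tensors="pt",
    )
    return pipe.text_encoder(tokens.input_ids.to("cuda"))[0]

start, end = embed("a cozy living room"), embed("an overgrown forest clearing")
init_image = Image.open("first_frame.png").convert("RGB")

frames = []
for t in torch.linspace(0, 1, steps=24):
    emb = torch.lerp(start, end, t.item())  # walk the embedding; slerp also works
    strength = 0.8 * float(t)               # ramp denoising strength 0 -> 0.8
    # Re-seed every frame so all frames share the same initial noise.
    gen = torch.Generator("cuda").manual_seed(42)
    out = pipe(prompt_embeds=emb, image=init_image,
               strength=max(strength, 0.05),  # the pipeline needs strength > 0
               generator=gen).images[0]
    frames.append(out)
```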
