2/ First I used DALL-E to generate outfits. I did this by erasing parts of my existing outfit and inpainting over it
Btw when I erased the entire outfit, the results didn't look as good. By keeping parts of the original, DALL-E was able to better match color and lighting
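The erase-and-inpaint step above can be sketched in code. DALL-E's image-edit endpoint treats transparent pixels in the mask as the region to repaint, so "erasing part of the outfit" amounts to punching a transparent hole in an RGBA copy of the frame. A minimal sketch with Pillow, where `outfit_box` is a hypothetical bounding box you'd pick per shot:

```python
from PIL import Image

def make_outfit_mask(img, outfit_box):
    """Return an RGBA mask for an inpainting API like DALL-E's edit endpoint.

    Pixels inside outfit_box (left, top, right, bottom) are made fully
    transparent -- that's the region the model repaints. Everything left
    opaque (the rest of the person, and any outfit you keep) stays fixed,
    which helps the model match color and lighting.
    """
    left, top, right, bottom = outfit_box
    mask = img.convert("RGBA").copy()
    hole = Image.new("RGBA", (right - left, bottom - top), (0, 0, 0, 0))
    mask.paste(hole, (left, top))  # paste replaces alpha too
    return mask
```

This only builds the mask; you'd then upload the frame plus mask to the edit endpoint with an outfit prompt.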
3/ But here’s the challenge. DALL-E works great for individual pictures, but it’s not designed for video. It won’t give you consistency from frame to frame.
Here was my early experiment. See, no consistency between frames
4/ It’s generating completely different outfits for every frame. But I want the same outfit to persist for several frames, which DALL-E currently can’t do
5/ After a bunch of experimentation, I discovered a program called #EbSynth by @scrtwpns
It's intended for propagating a painted style across video frames, but I wondered if it could work for clothes...
Demo video by EbSynth
6/ And it turns out, it DOES work for clothes!
It's not perfect, and if you look closely there are lots of artifacts, but it was good enough for me for this project
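The workflow here is: style one keyframe (the DALL-E'd outfit frame), then let EbSynth propagate it to the neighboring original frames, which act as motion guides. A sketch of building one invocation per frame for the open-source `ebsynth` command-line tool; the flags and the `in_`/`out_` naming are assumptions, not the exact setup used in this thread:

```python
def ebsynth_cmd(styled_key, original_key, original_frame, out_frame):
    """Build one ebsynth CLI call: transfer the styled keyframe onto
    original_frame, using the original keyframe as the guide pair."""
    return [
        "ebsynth",
        "-style", styled_key,                    # keyframe with the new outfit
        "-guide", original_key, original_frame,  # maps keyframe motion to this frame
        "-output", out_frame,
    ]

def propagate(styled_key, original_key, frames):
    """One command per original frame in the shot (hypothetical naming)."""
    return [
        ebsynth_cmd(styled_key, original_key, f, f.replace("in_", "out_"))
        for f in frames
    ]
```

You'd run each command with `subprocess.run`, then reassemble the output frames into video.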
7/ Finally I ran the video through DAIN, a frame-interpolation model, which smoothly blends from outfit to outfit
It had the added bonus of giving my video artificial slow-mo
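Frame interpolation is what makes both the blend and the slow-mo: new in-between frames get synthesized, so the same footage plays back over more frames. DAIN does this with depth-aware optical flow; as a naive stand-in to show the idea, here's a plain cross-fade that inserts `n` blended frames between two real ones:

```python
import numpy as np

def insert_midframes(a, b, n=3):
    """Insert n linearly blended frames between frames a and b.

    This cross-fade is a naive stand-in: DAIN estimates depth and
    optical flow to warp pixels along motion paths instead of just
    mixing them, which avoids the ghosting a plain blend produces.
    """
    steps = [(i + 1) / (n + 1) for i in range(n)]
    return [((1 - t) * a.astype(np.float32) + t * b.astype(np.float32))
            .astype(a.dtype) for t in steps]
```

Inserting 3 frames per original pair stretches the clip to roughly 4x length, which is where the "free" slow-mo comes from.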
8/ Here’s another example of a video I did using DAIN with @jperldev a while back
9/ Shoutout to @paultrillo - he's been using DALL-E in fascinating ways, and he inspired me to start trying to get DALL-E to work for video
It’s still early days for AI video - this tech will only get better. A whole new generation of filmmakers is gonna be able to make whatever they want on zero budget
NeRF update: Dolly zoom is now possible using @LumaLabsAI
I shot this on my phone. NeRF is gonna empower so many people to get cinematic-level shots
Tutorial below -