2/ First I used DALL-E to generate outfits. I did this by erasing parts of my existing outfit and inpainting over it
Btw when I erased the entire outfit, the results didn't look as good. By keeping parts of the original, DALL-E was able to better match color and lighting
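(If you'd rather script this than click around the web editor, here's a minimal sketch of the same idea using OpenAI's Images API. The file names, mask, and prompt are made up for illustration.)

```python
# Minimal inpainting sketch with the OpenAI Python SDK (pip install openai).
# The mask's fully transparent pixels mark the region DALL-E repaints; the
# opaque rest of the photo is kept, which is what helps it match color and
# lighting. File names and the prompt here are hypothetical.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

result = client.images.edit(
    model="dall-e-2",
    image=open("frame_0001.png", "rb"),  # original photo
    mask=open("outfit_mask.png", "rb"),  # transparent over part of the outfit
    prompt="a person wearing a red sequin jacket, photograph",
    n=1,
    size="1024x1024",
)
print(result.data[0].url)  # URL of the inpainted result
```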
3/ But here’s the challenge. DALL-E works great for individual pictures, but it’s not designed for video. It won’t give you consistency from frame to frame.
Here's one of my early experiments. See, no consistency between frames
4/ It's generating a completely different outfit for every frame. But I want the same outfit to persist across several frames, which DALL-E currently can't do
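Scripted out, the problem is obvious: every frame is an independent API call, so nothing ties frame N's outfit to frame N+1's. A sketch, reusing the same hypothetical files as above:

```python
# Why the outfits flicker: each frame is inpainted by an independent API call.
# No seed, no reference image, no temporal conditioning ties frame N to frame
# N+1, so the model samples a brand-new outfit every time.
from pathlib import Path
from openai import OpenAI

client = OpenAI()
for frame in sorted(Path("frames").glob("*.png")):
    client.images.edit(
        model="dall-e-2",
        image=open(frame, "rb"),
        mask=open("outfit_mask.png", "rb"),
        prompt="a person wearing a red sequin jacket, photograph",
        n=1,
        size="1024x1024",
    )  # an independent sample per frame -> no frame-to-frame consistency
```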
5/ After a bunch of experimentation, I discovered a program called #EbSynth by @scrtwpns
It's intended for painting style transfers, but I wondered if it could work for clothes...
Demo video by EbSynth
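EbSynth itself is a GUI, but what it consumes is an image sequence plus a few repainted keyframes. Here's a sketch of the prep step, assuming ffmpeg is installed (the folder layout and keyframe spacing are my own choices):

```python
# Prep for EbSynth: it takes (1) the original clip as numbered frames and
# (2) a few of those frames repainted as style keyframes, then propagates
# each keyframe's look across the neighboring frames. Assumes ffmpeg is on
# PATH; the folder layout and every-20th-frame spacing are hypothetical.
import shutil
import subprocess
from pathlib import Path

Path("video").mkdir(exist_ok=True)
Path("keys").mkdir(exist_ok=True)

# 1) Explode the clip into numbered PNGs (EbSynth wants an image sequence).
subprocess.run(["ffmpeg", "-i", "input.mp4", "video/%04d.png"], check=True)

# 2) Copy out sparse keyframes to repaint (e.g. in DALL-E), one per outfit.
for i, frame in enumerate(sorted(Path("video").glob("*.png"))):
    if i % 20 == 0:
        shutil.copy(frame, Path("keys") / frame.name)

# Repaint the images in keys/ with the new outfits, then point the EbSynth
# GUI at video/ and keys/ and it carries each outfit across nearby frames.
```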
6/ And it turns out, it DOES work for clothes!
It's not perfect, and if you look closely there are lots of artifacts, but it was good enough for me for this project
7/ Finally, I ran the video through DAIN, which smoothly blends from outfit to outfit
It had the added bonus of giving my video artificial slow mo
8/ Here’s another example of a video I did using DAIN with @jperldev a while back
9/ Shoutout to @paultrillo - he's been using DALL-E in fascinating ways, and he inspired me to start experimenting with getting DALL-E to work for video
2/ Here's the raw footage. I started off by creating a simple AR filter of two empty white rectangles using Adobe Aero. Shot by @AustinGumban
3/ Then @MichaelCarychao generated the nature side. Here's a peek at his behind-the-scenes process. Highly recommend checking out his account for more AI art ideas
2/ Something that AI headlines don't always capture is that as a human, you actually have a lot of artistic input in what the AI paints. The AI didn't draw all this automatically - I prompted it to draw certain elements
3/ Here's a sampling of the prompts I used. For each prompt, I added "painting by Johannes Vermeer" at the end for style
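Script-wise, the style trick is just string concatenation - here's a sketch with placeholder element prompts (not the actual ones from the piece):

```python
# Appending one fixed style suffix to every element prompt keeps the pieces
# stylistically consistent. The element list is illustrative, not the actual
# prompts from the project.
from openai import OpenAI

client = OpenAI()
STYLE = ", painting by Johannes Vermeer"

elements = [
    "a woman reading a letter by a window",
    "a pitcher of milk on a wooden table",
]

for element in elements:
    result = client.images.generate(
        model="dall-e-2",
        prompt=element + STYLE,  # same painter suffix on every prompt
        n=1,
        size="1024x1024",
    )
    print(result.data[0].url)
```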
2/ For every frame you see, it generates 8 frames in between with incredible smoothness and accuracy. Its main use case is to create artificial (and very convincing) slow motion on clips, but I thought it'd be interesting to apply it to stop motion to create "impossible" movement
3/ You can try DAIN by going to grisk.itch.io/dain-app. It works only on Windows with NVIDIA GPUs (it requires a TON of GPU power) and isn't the easiest to set up or run. But I do believe this technology will become more mainstream
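To make the 8-in-between-frames idea concrete, here's a toy stand-in. DAIN itself uses depth-aware optical flow; this naive cross-fade only shows the frame-count bookkeeping:

```python
# Toy frame interpolation to show the arithmetic, NOT how DAIN works: DAIN
# estimates depth and optical flow, while this just linearly cross-fades.
# With 8 in-between frames, N frames become roughly 9*N. Played at the
# original fps that's ~9x slow motion; played 9x faster it's buttery smooth.
# Folder names are hypothetical. Requires Pillow (pip install Pillow).
from pathlib import Path
from PIL import Image

IN_BETWEEN = 8  # intermediate frames per original pair, like DAIN's 8

frames = sorted(Path("frames").glob("*.png"))
out = Path("interpolated")
out.mkdir(exist_ok=True)

idx = 0
for a_path, b_path in zip(frames, frames[1:]):
    a = Image.open(a_path).convert("RGB")
    b = Image.open(b_path).convert("RGB")
    a.save(out / f"{idx:06d}.png")
    idx += 1
    for k in range(1, IN_BETWEEN + 1):
        t = k / (IN_BETWEEN + 1)  # 1/9, 2/9, ..., 8/9 of the way from a to b
        Image.blend(a, b, t).save(out / f"{idx:06d}.png")
        idx += 1
if frames:
    Image.open(frames[-1]).convert("RGB").save(out / f"{idx:06d}.png")
```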
2/ Here's how: he walked around me in circles a few times, pointing his phone at me. We fed the video into Instant-NGP, which created a NeRF out of the footage.
3/ It's kinda like a 3D model, but instead of a mesh + textures, it's more like a point cloud that changes color depending on the angle you view it from. This creates beautiful, surreal lighting effects - you can see how the light hits differently as we change camera angles.
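The pipeline, roughly - the script and flag names below come from the instant-ngp repo's README as I remember it, so double-check them there; the fps and paths are mine:

```python
# Rough phone-video-to-NeRF pipeline. Assumes ffmpeg and COLMAP are installed
# and that this runs inside a cloned NVlabs/instant-ngp checkout; the fps and
# paths are illustrative, and the flag names should be verified against the
# repo's README.
import subprocess
from pathlib import Path

Path("images").mkdir(exist_ok=True)

# 1) Explode the walk-around video into frames.
subprocess.run(
    ["ffmpeg", "-i", "walkaround.mp4", "-vf", "fps=2", "images/%04d.png"],
    check=True,
)

# 2) Recover each frame's camera pose with COLMAP and write transforms.json.
#    colmap2nerf.py ships with instant-ngp; this pose step is what lets a
#    NeRF be trained from ordinary handheld footage.
subprocess.run(
    ["python", "scripts/colmap2nerf.py", "--images", "images", "--run_colmap"],
    check=True,
)

# 3) Train and view the NeRF (older builds name this binary build/testbed).
subprocess.run(["./instant-ngp", "."], check=True)
```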
I used @OpenAI #dalle2 to create the first ever AI-generated magazine cover for @Cosmopolitan!! The prompt I used is at the end of the video #dalle
1/ For something like this, there was a TON of human involvement and decision-making. Each attempt takes only 20 seconds to generate, but it took hundreds of attempts.
Hours and hours of writing and refining prompts before getting the perfect image
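For a sense of what those hundreds of attempts look like as a workflow, here's a sketch of the generate-and-review loop (the prompt variants are placeholders, not the real cover prompt):

```python
# The human-in-the-loop grind: fire off a batch of candidates per hand-refined
# prompt variant, save them all, and choose with your eyes. The variants below
# are placeholders, not the real cover prompt.
import urllib.request
from openai import OpenAI

client = OpenAI()

variants = [
    "an astronaut striding toward the camera, low angle, dramatic lighting",
    "an astronaut striding toward the camera on mars, wide angle from below",
]

for v_idx, prompt in enumerate(variants):
    result = client.images.generate(
        model="dall-e-2", prompt=prompt, n=4, size="1024x1024"
    )
    for i, img in enumerate(result.data):
        urllib.request.urlretrieve(img.url, f"candidate_{v_idx}_{i}.png")
        # a person reviews every saved candidate; the model never picks
```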
2/ I think the natural reaction is to fear that AI will replace human artists. Certainly that thought crossed my mind. But the more I use #dalle2, the less I see this as a replacement for humans, and the more I see it as a tool for humans to use - an instrument to play