For this concept, I was inspired by the work of @model_mechanic. I wanted to try combining it with live-action footage to interact with a real physical object
2/ First, I used DALL-E to generate the outfits. I did this by erasing parts of my existing outfit and inpainting over them
Btw when I erased the entire outfit, the results didn't look as good. By keeping parts of the original, DALL-E was able to better match color and lighting
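The "erasing" step above corresponds to making pixels transparent: DALL-E's edit endpoint repaints only the fully transparent regions of the mask you upload. Here's a minimal sketch of building such a mask with Pillow; the image is a solid stand-in (in practice you'd `Image.open` your own photo) and the erase box coordinates are hypothetical.

```python
from PIL import Image

# Stand-in for the original photo; replace with Image.open("your_shot.png")
img = Image.new("RGBA", (512, 512), (200, 150, 100, 255))

# Hypothetical box over part of the outfit. Keeping the rest of the
# original opaque is what lets DALL-E match color and lighting.
erase_box = (120, 200, 380, 500)  # left, top, right, bottom
w, h = erase_box[2] - erase_box[0], erase_box[3] - erase_box[1]

mask = img.copy()
# Paste a fully transparent patch: alpha 0 marks "repaint this region"
mask.paste(Image.new("RGBA", (w, h), (0, 0, 0, 0)), erase_box[:2])
```

The resulting `mask` image would be uploaded alongside the prompt, while the untouched `img` stays as the reference.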
3/ But here’s the challenge. DALL-E works great for individual pictures, but it’s not designed for video. It won’t give you consistency from frame to frame.
Here's an early experiment. See, there's no consistency between frames
2/ Here's the raw footage. I started off by creating a simple AR filter of two empty white rectangles using Adobe Aero. Shot by @AustinGumban
3/ Then @MichaelCarychao generated the nature side. Here's a peek at his behind-the-scenes process. Highly recommend checking out his account for more AI art ideas
2/ Something that AI headlines don't always capture is that, as a human, you actually have a lot of artistic input in what the AI paints. The AI didn't draw all of this automatically: I prompted it to draw certain elements
3/ Here's a sampling of the prompts I used. For each prompt, I added "painting by Johannes Vermeer" at the end for style
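The suffix trick above is easy to script when generating many prompts. A tiny sketch (the subject lines here are hypothetical, not the actual prompts from the piece):

```python
# Hypothetical subjects; the fixed suffix steers every image
# toward the same painterly style.
subjects = [
    "a vase of tulips on a wooden table",
    "sunlight falling through a leaded window",
]
STYLE = ", painting by Johannes Vermeer"
prompts = [s + STYLE for s in subjects]
```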
2/ For every frame you see, it generates 8 frames in between with incredible smoothness and accuracy. Its main use case is to create artificial (and very convincing) slow motion on clips, but I thought it'd be interesting to apply it to stop motion to create "impossible" movement
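To illustrate the "8 frames in between" idea, here's a naive sketch that just linearly crossfades between two frames. This is NOT how DAIN works (it uses learned, depth-aware motion estimation); a plain blend only shows the shape of the interpolation problem.

```python
import numpy as np

def interpolate(frame_a, frame_b, n=8):
    """Yield n in-between frames as simple linear blends of two
    (H, W, 3) arrays. A toy stand-in for learned interpolation."""
    for i in range(1, n + 1):
        t = i / (n + 1)
        yield ((1 - t) * frame_a + t * frame_b).astype(frame_a.dtype)

# Two tiny dummy frames: all-black -> all-white
a = np.zeros((4, 4, 3), dtype=np.float32)
b = np.ones((4, 4, 3), dtype=np.float32)
frames = list(interpolate(a, b, n=8))  # 8 new frames per original pair
```

Running this on a real clip at n=8 turns 30 fps footage into material for 9x slow motion, which is the effect being repurposed here for stop motion.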
3/ You can try DAIN by going to grisk.itch.io/dain-app. It runs on Windows with NVIDIA GPUs only (it requires a TON of GPU power) and isn't the easiest to set up or run. But I do believe this technology will become more mainstream
2/ Here's how: he walked around me in circles a few times, pointing his phone at me. We fed the video into Instant-NGP, which created a NeRF out of the footage.
3/ It's kinda like a 3D model, but instead of a mesh + textures, it's more like a point cloud that changes color depending on the angle you view it from. This creates beautiful, surreal lighting effects: you can see how the light hits differently as we change camera angles.