5/ We've also used Instant-NGP in the past, but have found that @nerfstudioteam does "drone" shots better because of how it handles large unbounded scenes.
Below you can see the same footage fed into Instant-NGP for comparison
6/ Instant-NGP tends to create lots of cloudy artifacts. Those floaters can look really cool and stylistic though - here's a previous NeRF I did with @jperldev that leans into the cloudy look
2/ First I used DALL-E to generate outfits. I did this by erasing parts of my existing outfit and inpainting over it
Btw when I erased the entire outfit, the results didn't look as good. By keeping parts of the original, DALL-E was able to better match color and lighting
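For the curious: DALL-E 2's edit endpoint takes the original image plus an RGBA mask, and only the fully transparent pixels (alpha = 0) get repainted - everything opaque is kept, which is exactly why leaving parts of the original outfit helps it match color and lighting. A minimal sketch of building such a mask (the erase region coordinates here are just an example, not from my actual edits):

```python
def make_inpaint_mask(width, height, erase_box):
    """Return an RGBA pixel grid: transparent inside erase_box, opaque elsewhere.

    DALL-E 2's image edit endpoint regenerates only the transparent
    (alpha = 0) pixels of the mask; opaque pixels keep the original image.
    """
    x0, y0, x1, y1 = erase_box
    mask = []
    for y in range(height):
        row = []
        for x in range(width):
            if x0 <= x < x1 and y0 <= y < y1:
                row.append((0, 0, 0, 0))    # transparent -> DALL-E repaints this
            else:
                row.append((0, 0, 0, 255))  # opaque -> original pixels kept
        mask.append(row)
    return mask

# Hypothetical 8x8 image where only the center "outfit" region is erased
mask = make_inpaint_mask(8, 8, erase_box=(2, 2, 6, 6))
print(mask[4][4][3], mask[0][0][3])  # 0 255
```

You'd save this as a PNG and send it alongside the original photo; the more opaque context you leave, the more the inpainted region blends in.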
3/ But here’s the challenge. DALL-E works great for individual pictures, but it’s not designed for video. It won’t give you consistency from frame to frame.
Here's one of my early experiments - see how there's no consistency between frames
@MichaelCarychao 2/ Here's the raw footage. I started off by creating a simple AR filter of two empty white rectangles using Adobe Aero. Shot by @AustinGumban
@MichaelCarychao @AustinGumban 3/ Then @MichaelCarychao generated the nature side. Here's a peek at his behind-the-scenes process. Highly recommend checking out his account for more AI art ideas
@OpenAI @literallydenis 2/ Something that AI headlines don't always capture is that as a human, you actually have a lot of artistic input in what the AI paints. The AI didn't draw all this automatically - I prompted it to draw certain elements
@OpenAI @literallydenis 3/ Here's a sampling of the prompts I used. For each prompt, I added "painting by Johannes Vermeer" at the end for style
2/ For every frame you see, it generates 8 frames in between with incredible smoothness and accuracy. Its main use case is creating artificial (and very convincing) slow motion on clips, but I thought it'd be interesting to apply it to stop motion to create "impossible" movement
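To make the "8 frames in between" idea concrete: DAIN estimates depth and optical flow and warps pixels along motion paths, which is far beyond a simple blend - but the basic shape of the operation can be sketched with a naive cross-fade (this is a stand-in for illustration, not what DAIN actually computes):

```python
def interpolate_frames(frame_a, frame_b, n_between=8):
    """Return n_between synthetic frames between frame_a and frame_b.

    This is a naive linear cross-fade. DAIN instead estimates depth and
    optical flow and warps pixels along motion paths - the blend here only
    shows the "insert 8 frames between every pair" structure.
    """
    frames = []
    for i in range(1, n_between + 1):
        t = i / (n_between + 1)  # fractional time of the in-between frame
        frames.append([(1 - t) * a + t * b for a, b in zip(frame_a, frame_b)])
    return frames

# Toy 3-pixel "frames": brightness ramps from 0 to 9
mids = interpolate_frames([0.0, 0.0, 0.0], [9.0, 9.0, 9.0])
print(len(mids))    # 8
print(mids[0][0])   # 1.0
```

Played back at the original frame rate, those 8 extra frames per pair are what turn ordinary footage into smooth slow motion.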
3/ You can try DAIN by going to grisk.itch.io/dain-app. It works on Windows with NVIDIA GPUs only (it requires a TON of GPU power) and isn't the easiest to set up or run. But I do believe this technology will become more mainstream
2/ Here's how: the cameraman walked around me in circles a few times, pointing his phone at me. We fed the video into Instant-NGP, which created a NeRF out of the footage.
3/ It's kinda like a 3D model, but instead of a mesh + textures, it's more like a point cloud that changes color depending on the angle you view it from. This creates beautiful surreal lighting effects - you can see how the light hits differently as we change camera angles.
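That view-dependent color is the core trick of a NeRF: the learned function takes both a 3D point and a viewing direction, and returns a color (plus a density). A real NeRF learns this with a neural network; here's a hand-written toy stand-in just to show why the same point can look different from different angles (the fixed light direction and the specular-style formula are made up for illustration):

```python
import math

LIGHT_DIR = (0.0, 0.0, 1.0)  # hypothetical fixed highlight direction

def view_dependent_color(point, view_dir):
    """Toy NeRF-style color query: same point, different angle -> different RGB.

    A real NeRF replaces this whole function with a trained MLP that maps
    (position, view direction) to (color, density).
    """
    norm = math.sqrt(sum(v * v for v in view_dir))
    unit = tuple(v / norm for v in view_dir)
    # Specular-style term: brightens as the view aligns with LIGHT_DIR
    glint = max(0.0, sum(u * l for u, l in zip(unit, LIGHT_DIR)))
    base = 0.3  # the point's angle-independent diffuse gray
    return tuple(min(1.0, base + 0.7 * glint) for _ in range(3))

head_on = view_dependent_color((0, 0, 0), (0, 0, 1))  # looking along LIGHT_DIR
side_on = view_dependent_color((0, 0, 0), (1, 0, 0))  # looking from the side
print(head_on[0], side_on[0])  # 1.0 0.3
```

Because color is conditioned on the camera direction, orbiting the capture reproduces those shifting glints and surreal lighting you see in the footage.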
I used @OpenAI #dalle2 to create the first ever AI-generated magazine cover for @Cosmopolitan!! The prompt I used is at the end of the video #dalle
1/ For something like this, there was a TON of human involvement and decision-making. While each attempt takes only 20 seconds to generate, it took hundreds of attempts.
Hours and hours of prompt generating and refining before getting the perfect image
2/ I think the natural reaction is to fear that AI will replace human artists. Certainly that thought crossed my mind. But the more I use #dalle2, the less I see it as a replacement for humans, and the more I see it as a tool for humans to use - an instrument to play