NeRF update: Dolly zoom is now possible using @LumaLabsAI
I shot this on my phone. NeRF is gonna empower so many people to get cinematic-level shots
Tutorial below -
2/ First, stand still. Have a friend take a quick video of you
Capture the area in front of you and behind you
3/ Here's the raw footage btw for anyone curious
(Feel free to download it so you can follow along)
4/ Upload footage to @LumaLabsAI
Wait (about 30 minutes) while Luma turns it into a NeRF
Disclosure: I am an advisor for Luma. They've asked me to test some features. All opinions are my own.
5/ Luma just released a new feature - the ability to change focal length. That's what we'll be using to achieve the dolly zoom effect
The trick to getting a dolly zoom is to keep the subject's size on screen constant
Change the camera distance, then use focal length to compensate so your body size stays the same
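The compensation follows from the pinhole camera model: on-screen subject size scales with focal length divided by distance, so doubling the distance means doubling the focal length (i.e., narrowing the FOV). A quick sketch of the math (the function name and example numbers are mine, not part of Luma's tool):

```python
import math

def dolly_zoom_fov(fov1_deg: float, d1: float, d2: float) -> float:
    """FOV (degrees) needed at distance d2 so the subject appears the
    same size as it did at distance d1 with FOV fov1_deg.

    Pinhole model: image size ~ f / d, and tan(fov/2) = sensor / (2f),
    so keeping f/d constant means tan(fov2/2) = tan(fov1/2) * d1 / d2.
    """
    half1 = math.radians(fov1_deg) / 2
    half2 = math.atan(math.tan(half1) * d1 / d2)
    return math.degrees(2 * half2)

# Doubling the distance roughly halves the field of view:
print(round(dolly_zoom_fov(60, 2, 4), 1))  # ~32.2
```

In Luma you'd do this by eye (see the Post-it trick below), but the same relationship is why the background appears to stretch while you stay the same size.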
6/ My low-tech solution to measuring my body size: I stick a Post-it note to my monitor
Then I increase the distance and adjust the FOV so my body size stays the same
Lastly, I added some speed ramping in After Effects and color grading in Premiere
7/ Kinda crazy all that was made from a quick phone video
2/ First I used DALL-E to generate outfits. I did this by erasing parts of my existing outfit and inpainting over it
Btw when I erased the entire outfit, the results didn't look as good. By keeping parts of the original, DALL-E was able to better match color and lighting
3/ But here’s the challenge. DALL-E works great for individual pictures, but it’s not designed for video. It won’t give you consistency from frame to frame.
Here was my early experiment. See, no consistency between frames
2/ Here's the raw footage. I started off by creating a simple AR filter of two empty white rectangles using Adobe Aero. Shot by @AustinGumban
3/ Then @MichaelCarychao generated the nature side. Here's a peek at his behind-the-scenes process. Highly recommend checking out his account for more AI art ideas
2/ Something that AI headlines don't always capture is that, as a human, you actually have a lot of artistic input into what the AI paints. The AI didn't draw all this automatically - I prompted it to draw certain elements
3/ Here's a sampling of the prompts I used. For each prompt, I added "painting by Johannes Vermeer" at the end for style
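Appending a style suffix like that is easy to script when batching prompts. A tiny sketch (the base prompts here are hypothetical placeholders, not the ones from the thread):

```python
STYLE_SUFFIX = "painting by Johannes Vermeer"

# Hypothetical base prompts - the thread doesn't list the exact ones
base_prompts = [
    "a window letting in soft morning light",
    "a map hanging on the back wall",
]

# Append the style suffix to every prompt before sending to the API
styled = [f"{p}, {STYLE_SUFFIX}" for p in base_prompts]
print(styled[0])
```

Keeping the style suffix constant across all prompts is what keeps the inpainted regions visually consistent with each other.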
2/ For every frame you see, it generates 8 frames in between with incredible smoothness and accuracy. Its main use case is creating artificial (and very convincing) slow motion on clips, but I thought it'd be interesting to apply it to stop motion to create "impossible" movement
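For anyone doing the math: inserting 8 synthetic frames into every gap multiplies the frame count by roughly 9x, which is the slow-motion factor you get when playing the result back at the original frame rate. A rough sketch of the frame-count arithmetic only (not the actual DAIN interpolation):

```python
def interpolated_frame_count(n_frames: int, per_gap: int = 8) -> int:
    """Total frames after inserting `per_gap` synthetic frames between
    every adjacent pair of original frames."""
    return n_frames + (n_frames - 1) * per_gap

def slowmo_factor(n_frames: int, per_gap: int = 8) -> float:
    """Approximate slow-motion factor when the interpolated clip is
    played back at the original frame rate."""
    return interpolated_frame_count(n_frames, per_gap) / n_frames

# A 100-frame clip becomes 892 frames - almost 9x slow motion
print(interpolated_frame_count(100), round(slowmo_factor(100), 2))
```

For stop motion, you'd instead play the expanded clip at a higher frame rate, which is what creates the eerily fluid "impossible" motion.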
3/ You can try DAIN by going to grisk.itch.io/dain-app. It runs on Windows with NVIDIA GPUs only (it requires a TON of GPU power) and isn't the easiest to set up or run. But I do believe this technology will become more mainstream