NeRF has shown incredible view synthesis results, but it requires multi-view captures for STATIC scenes.
How can we achieve view synthesis for DYNAMIC scenes from a single video? Here is what I learned from several recent efforts.
Instead of presenting Video-NeRF, Nerfie, NR-NeRF, D-NeRF, NeRFlow, NSFF (and many others!) as individual algorithms, here I try to view them from a unifying perspective and understand the pros/cons of various design choices.
Okay, here we go.
*Background*
NeRF represents the scene as a 5D continuous volumetric scene function that maps the spatial position and viewing direction to color and density. It then projects the colors/densities to form an image with volume rendering.
Volumetric + Implicit -> Awesome!
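To make the volume rendering step concrete, here is a minimal numpy sketch of the quadrature NeRF uses to composite per-sample colors/densities into a pixel color (function and variable names are mine, not from any released codebase):

```python
import numpy as np

def volume_render(sigmas, colors, deltas):
    """Composite samples along one ray (NeRF-style quadrature).

    sigmas: (N,) densities, colors: (N, 3), deltas: (N,) segment lengths.
    """
    alphas = 1.0 - np.exp(-sigmas * deltas)            # opacity of each segment
    # Transmittance: probability the ray reaches each sample unoccluded.
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    weights = alphas * trans                           # contribution per sample
    rgb = (weights[:, None] * colors).sum(axis=0)      # expected color of the ray
    return rgb, weights
```

A dense first sample occludes everything behind it, which is exactly what makes the representation handle occlusion for free.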
*Model*
Building on NeRF, one can extend it to handle dynamic scenes with two types of approaches.
A) 4D (or 6D with views) function.
One direct approach is to include TIME as an additional input to learn a DYNAMIC radiance field.
e.g., Video-NeRF, NSFF, NeRFlow
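In code, approach A is as simple as it sounds: time becomes one more coordinate fed through the same positional encoding NeRF already applies to spatial inputs. A sketch (my own minimal version):

```python
import numpy as np

def posenc(v, num_freqs):
    """Sin/cos positional encoding applied to each input coordinate."""
    freqs = 2.0 ** np.arange(num_freqs)            # 1, 2, 4, 8, ...
    angles = np.outer(freqs, v).ravel()
    return np.concatenate([v, np.sin(angles), np.cos(angles)])

# A dynamic radiance field encodes (x, y, z, t) instead of just (x, y, z):
encoded = posenc(np.array([0.1, 0.2, 0.3, 0.5]), num_freqs=4)  # t = 0.5 appended
```

The MLP downstream is unchanged; only its input dimensionality grows.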
B) 3D Template with Deformation.
Inspired by non-rigid reconstruction methods, this type of approach learns a radiance field in a canonical frame (template) and predicts deformation for each frame to account for dynamics over time.
e.g., Nerfie, NR-NeRF, D-NeRF
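Approach B boils down to one extra lookup before querying the field. A sketch of the data flow, with hypothetical stand-in interfaces:

```python
import numpy as np

def render_in_canonical(x, d, t, deform_mlp, canonical_field):
    """Warp the sample into the canonical frame, then query the single
    static radiance field that lives there (interfaces are illustrative)."""
    dx = deform_mlp(x, t)               # per-point offset toward the template
    return canonical_field(x + dx, d)   # colors/densities defined in the template

# Toy stand-ins just to show the plumbing:
deform = lambda x, t: -t * np.ones(3)   # everything drifts linearly with time
canonical = lambda x, d: x              # "field" echoes the canonical query point
out = render_in_canonical(np.ones(3), None, 0.5, deform, canonical)
```

The appeal: geometry/appearance are shared across all frames; only the (cheaper) deformation varies with time.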
*Deformation Model*
All the methods use an MLP to encode the deformation field. But, how do they differ?
A) INPUT: How to encode the additional time dimension as input?
B) OUTPUT: How to parametrize the deformation field?
A) Input conditioning
One can choose to use EXPLICIT conditioning by treating the frame index t as input.
Alternatively, one can use a learnable LATENT vector for each frame.
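The two conditioning choices differ only in what gets concatenated to the point coordinates. A minimal sketch (dimensions and names are my choices, not from any specific paper):

```python
import numpy as np

rng = np.random.default_rng(0)
# Latent conditioning: one learnable code per frame, optimized jointly
# with the network (8-D and 100 frames are illustrative sizes).
latent_codes = rng.normal(size=(100, 8))

def deform_explicit(x, t, mlp):
    # Explicit: the MLP sees the (normalized) frame index directly.
    return mlp(np.concatenate([x, [t]]))

def deform_latent(x, frame_idx, mlp):
    # Latent: the MLP sees the frame's code; gradients flow into the code too.
    return mlp(np.concatenate([x, latent_codes[frame_idx]]))
```

Latent codes give the model slack to absorb per-frame effects (e.g., appearance changes) at the cost of not generalizing to unseen time indices without interpolation.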
B) Output parametrization
We can either use the MLP to predict
- dense 3D translation vectors (aka scene flow) or
- a dense rigid motion field (a per-point rotation + translation, i.e., an SE(3) transform)
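The two output parametrizations, sketched side by side (the rigid variant uses Rodrigues' formula to turn an axis-angle vector into a rotation matrix):

```python
import numpy as np

def apply_scene_flow(x, flow):
    # Translation-only output: the MLP predicts a 3-vector per point.
    return x + flow

def apply_rigid_motion(x, rotvec, trans):
    """Rigid-motion output: the MLP predicts an axis-angle rotation plus a
    translation per point. Rodrigues' formula: R = I + sin(th) K + (1 - cos(th)) K^2."""
    theta = np.linalg.norm(rotvec)
    if theta < 1e-8:                      # near-zero rotation: pure translation
        return x + trans
    k = rotvec / theta                    # unit rotation axis
    K = np.array([[0, -k[2], k[1]],
                  [k[2], 0, -k[0]],
                  [-k[1], k[0], 0]])      # cross-product matrix of k
    R = np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)
    return R @ x + trans
```

The rigid parametrization builds in a local rigidity prior; plain scene flow is more flexible but leans harder on regularization.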
With these design choices in mind, we can mix-n-match to synthesize all the methods.
*Regularization*
Adding the deformation field introduces ambiguities. So we need to make it "well-behaved", e.g., the deformation field should be spatially smooth, temporally smooth, sparse, and avoid contraction and expansion.
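The regularizers above are typically simple penalties on the predicted offsets. An illustrative sketch (the exact weights and forms vary across papers; these names are mine):

```python
import numpy as np

def deformation_regularizers(dx, dx_neighbor, dx_next_t):
    """dx: (P, 3) predicted offsets at sampled points; dx_neighbor: offsets at
    spatially nearby points; dx_next_t: offsets at the same points, next frame."""
    spatial = np.mean((dx - dx_neighbor) ** 2)   # nearby points should move alike
    temporal = np.mean((dx - dx_next_t) ** 2)    # motion should change slowly in time
    sparsity = np.mean(np.abs(dx))               # most of the scene should stay still
    return spatial + temporal + sparsity
```

Penalizing the divergence of the deformation field (not shown) is one way to discourage contraction/expansion.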
*Depth supervision*
Unlike other methods above, Video-NeRF (shameless plug here) does not require a separate deformation field (and various other regularization terms) by using direct depth supervision to constrain the time-varying geometry.
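The key observation is that the same rendering weights used for color also yield a depth, which can be matched against a monocular depth estimate. A plain L2 sketch (the actual loss in the paper may be more robust than this):

```python
import numpy as np

def expected_depth(weights, z_vals):
    # Depth rendered with the same quadrature weights used for color.
    return (weights * z_vals).sum()

def depth_supervision_loss(weights, z_vals, mono_depth):
    # Penalize disagreement with a per-frame monocular depth estimate.
    return (expected_depth(weights, z_vals) - mono_depth) ** 2
```

This directly constrains the time-varying geometry without any deformation machinery.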
With further improvement on single video depth estimation (another shameless plug 🤩), I am very excited to see dynamic view synthesis on videos in the wild soon!
How do we get pseudo labels from unlabeled images?
Unlike classification, directly thresholding the network outputs for dense prediction doesn't work well.
Our idea: start with weakly sup. localization (Grad-CAM) and refine it with self-attention for propagating the scores.
Using two different prediction mechanisms is great because they make errors in different ways. With our fusion strategy, we get WELL-CALIBRATED pseudo labels (see the expected calibration errors in E below) and IMPROVED accuracy under 1/4, 1/8, 1/16 of labeled examples.
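The propagation step can be pictured as one attention product: build a self-attention matrix from pixel features, then use it to spread the coarse Grad-CAM scores to similar pixels. A minimal sketch of the idea (my own simplified version, not the paper's exact formulation):

```python
import numpy as np

def refine_with_attention(cam_scores, features):
    """cam_scores: (P,) coarse localization scores per pixel;
    features: (P, F) pixel embeddings used to measure similarity."""
    sim = features @ features.T                    # pairwise similarity logits
    attn = np.exp(sim) / np.exp(sim).sum(axis=1, keepdims=True)  # row-wise softmax
    return attn @ cam_scores                       # scores propagated to similar pixels
```

(A numerically stable softmax would subtract the row max before exponentiating; omitted here for brevity.)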
Have you ever wondered why papers from top universities/research labs often appear in the top few positions in the daily email and web announcements from arXiv?
Why is that the case? Why should I care?
Wait a minute! Does the article position even matter?
The method achieves AWESOME results but requires precise camera poses as inputs.
Isn't SLAM/SfM a SOLVED problem? You might ask.
Yes, it works pretty well for static and controlled environments. For casually captured videos, however, existing methods usually fail to register all frames or produce outlier poses with large errors.
How can we learn NeRF from a SINGLE portrait image? Check out @gaochen315's recent work, which leverages new (meta-learning) and old (3D morphable model) tricks to make it work! This allows us to synthesize new views and manipulate FOV.
Training a NeRF from scratch on a single image won't work because it cannot recover the correct shape. The rendering results look fine at the original viewpoint but show large errors at novel views.
Congratulations Jinwoo Choi for passing his Ph.D. thesis defense!
Special thanks to the thesis committee members (Lynn Abbott, @berty38, Harpreet Dhillon, and Gaurav Sharma) for their valuable feedback and advice.
Jinwoo started his Ph.D. by building an interactive system for home-based stroke rehabilitation, published at ASSETS 2017 and PETRA 2018.
These preliminary efforts laid the foundation for a recent $1.1 million NSF Smart and Connected Health award!
He then looked into scene biases in action recognition datasets and presented debiasing methods that lead to improved generalization in downstream tasks [Choi NeurIPS 19]. chengao.vision/SDN/