How can we learn a NeRF from a SINGLE portrait image? Check out @gaochen315's recent work, which leverages new (meta-learning) and old (3D morphable model) tricks to make it work! This allows us to synthesize new views and manipulate the FOV.
Training a NeRF from scratch on a single image won't work because it cannot recover the correct shape. The rendering looks fine at the original viewpoint but produces large errors at novel views.
How about transfer learning? Pretrain a NeRF on a collection of multi-view face images.
Well, this works okay, but still performs poorly on *unseen* subjects due to diverse appearance and shape variations.
Our idea: use meta-learning to pretrain our neural implicit model so that it can QUICKLY ADAPT to an unseen subject.
(This is similar to another excellent project on learning initialization for implicit representations: matthewtancik.com/learnit)
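A minimal sketch of what such meta-learned pretraining could look like, using a Reptile-style outer loop (not necessarily the paper's exact training procedure). The helpers make_nerf_mlp, render_loss, and sample_subject are hypothetical stand-ins for the NeRF MLP, the photometric rendering loss, and the multi-view face dataset loader:

```python
# Reptile-style meta-learning of a NeRF initialization (sketch, with assumed helpers).
import copy
import torch

meta_model = make_nerf_mlp()                      # shared initialization (theta)
meta_lr, inner_lr, inner_steps = 0.1, 5e-4, 32

for meta_step in range(10_000):
    subject_views = sample_subject()              # multi-view images of one training subject
    model = copy.deepcopy(meta_model)             # start the inner loop from theta
    opt = torch.optim.Adam(model.parameters(), lr=inner_lr)

    for _ in range(inner_steps):                  # adapt to this subject
        loss = render_loss(model, subject_views)  # photometric loss on rendered rays
        opt.zero_grad(); loss.backward(); opt.step()

    # Reptile update: move theta toward the adapted weights
    with torch.no_grad():
        for p_meta, p_adapted in zip(meta_model.parameters(), model.parameters()):
            p_meta += meta_lr * (p_adapted - p_meta)
```

The idea is that at test time the same inner loop is run from the meta-learned weights on the single input portrait, which is what makes the adaptation fast.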
In our application, however, the shapes of the subjects vary. We compensate for this shape variability with a 3D morphable model and learn the implicit function in a *canonical 3D coordinate space*.
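A rough sketch of the canonical-space idea, assuming hypothetical helpers fit_3dmm (fits the morphable model to the portrait) and warp_to_canonical (maps world-space ray samples onto the canonical face), rather than the paper's actual implementation:

```python
# Querying the implicit function in a canonical 3D face space (sketch, with assumed helpers).

def query_canonical_nerf(model, points_world, view_dirs, morphable_fit):
    """Map ray samples into the canonical face space, then query the shared NeRF MLP."""
    points_canonical = warp_to_canonical(points_world, morphable_fit)
    rgb, sigma = model(points_canonical, view_dirs)
    return rgb, sigma

# Per-subject usage: fit the 3D morphable model once, reuse it for every ray sample.
# morphable_fit = fit_3dmm(portrait_image)
# rgb, sigma = query_canonical_nerf(meta_model, samples, dirs, morphable_fit)
```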
• • •
Have you ever wondered why papers from top universities/research labs often appear in the top few positions in the daily email and web announcements from arXiv?
Why is that the case? Why should I care?
Wait a minute! Does the article position even matter?
The method achieves AWESOME results but requires precise camera poses as input.
Isn't SLAM/SfM a SOLVED problem? You might ask.
Yes, it works pretty well for static and controlled environments. For casually captured videos, however, existing methods usually fail to register all frames or produce outlier poses with large errors.
Congratulations to Jinwoo Choi on passing his Ph.D. thesis defense!
Special thanks to the thesis committee members (Lynn Abbott, @berty38, Harpreet Dhillon, and Gaurav Sharma) for their valuable feedback and advice.
Jinwoo started his Ph.D. by building an interactive system for home-based stroke rehabilitation, published at ASSETS 2017 and PETRA 2018.
These preliminary efforts laid the foundation for a recent $1.1 million NSF Smart and Connected Health award!
He then looked into scene biases in action recognition datasets and presented debiasing methods that led to improved generalization in downstream tasks [Choi NeurIPS 19]. chengao.vision/SDN/