How can we turn casual videos into 3D? Excited to share our work on Robust Consistent Video Depth Estimation.

Project: robust-cvd.github.io
Paper: arxiv.org/abs/2012.05901

w/ @JPKopf @jastarex

Check out the 🧵 below!
We start by examining our Consistent Video Depth Estimation (CVD) work from SIGGRAPH 2020 (led by the amazing @XuanLuo14).

roxanneluo.github.io/Consistent-Vid…

The method achieves AWESOME results but requires precise camera poses as input.
You might ask: isn't SLAM/SfM a SOLVED problem?

Yes, it works pretty well for static, controlled environments. But for casual videos, existing methods often fail to register all frames or produce outlier poses with large errors.

As a result, CVD works only *when SfM works*.
How can we make video depth estimation ROBUST?

Our idea: *Joint optimization* of depth and camera poses.

However, optimizing only a per-frame depth scale or finetuning the depth network results in poor camera pose trajectories (c-d) due to depth misalignment.
We resolve this problem by replacing the per-frame camera scale with a more flexible *spatially-varying transformation*. The improved alignment of the depth enables computing smoother and more accurate pose trajectories!
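To make the idea concrete, here is a minimal sketch (my own illustration, not the paper's code) of what a spatially-varying scale could look like: instead of multiplying each frame's depth map by a single scalar, we bilinearly upsample a coarse grid of per-frame scale factors and multiply pointwise. The function name and grid resolution are hypothetical.

```python
import numpy as np

def deform_depth(depth, scale_grid):
    """Apply a spatially-varying scale map to a depth map.

    depth:      (H, W) initial depth estimate for one frame
    scale_grid: (h, w) coarse grid of optimizable scale factors (h << H);
                a 1x1 grid reduces to the classic per-frame scalar scale.
    """
    H, W = depth.shape
    h, w = scale_grid.shape
    # Bilinearly upsample the coarse grid to full resolution.
    ys = np.linspace(0, h - 1, H)
    xs = np.linspace(0, w - 1, W)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]
    wx = (xs - x0)[None, :]
    top = scale_grid[np.ix_(y0, x0)] * (1 - wx) + scale_grid[np.ix_(y0, x1)] * wx
    bot = scale_grid[np.ix_(y1, x0)] * (1 - wx) + scale_grid[np.ix_(y1, x1)] * wx
    scale_map = top * (1 - wy) + bot * wy
    # Pointwise deformation: each region of the frame can stretch independently.
    return depth * scale_map
```

During the joint optimization, the entries of `scale_grid` (one grid per frame) would be free variables alongside the camera poses, giving the depth maps enough flexibility to align across frames.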
The flexible depth deformation is great, but it can only achieve low-frequency alignment of the depth maps. We further introduce a spatio-temporal, geometry-aware depth filter (following flow trajectories) to improve fine depth details.
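One way to picture such a filter (a hypothetical simplification of the idea, not the authors' implementation): gather the depth samples of the same scene point along an optical-flow trajectory, then average them with weights that decay when a sample disagrees with the center frame's depth, so occluded or badly tracked samples contribute little.

```python
import numpy as np

def filter_depth_along_trajectory(depths, sigma_d=0.1):
    """Geometry-aware temporal filtering of depth samples collected
    along one flow trajectory (illustrative simplification).

    depths: 1D array of depth values of the same scene point, tracked
            across neighboring frames by optical flow.
    """
    center = depths[len(depths) // 2]
    # Gaussian weight on relative depth disagreement with the center frame:
    # samples whose depth differs a lot (occlusion, flow error) are downweighted.
    w = np.exp(-(((depths - center) / (sigma_d * center)) ** 2))
    return float(np.sum(w * depths) / np.sum(w))
```

Applied densely over all trajectories, this kind of filtering smooths temporal flicker in the depth while the geometry-aware weights keep genuine depth edges from being averaged away.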
To validate the robustness, we show that we can estimate consistent depth and smooth camera poses on all 90 videos in the DAVIS dataset.

No cherry-picking!

Can't wait to see how this will lead to *3D-aware* video recognition/synthesis and beyond!
Also, please check out our results on extracting long, smooth camera trajectories.

Hopefully, we will have something similar to the hologram in the movie Minority Report (2002) soon! 😍


More from @jbhuang0604

13 Dec
Have you ever wondered why papers from top universities/research labs often appear in the top few positions in the daily email and web announcements from arXiv?

Why is that the case? Why should I care?
Wait a minute! Does the article position even matter?

It matters!

See arxiv.org/abs/0907.4740

-> Articles in position 1 received median numbers of citations 83%, 50%, and 100% higher than those lower down in three communities.
So you get a significant visibility boost, wider readership, and long-term citations and impact by ...

simply putting your paper in the top position of the listing!

Crazy huh?
11 Dec
How can we learn NeRF from a SINGLE portrait image? Check out @gaochen315's recent work that leverages new (meta-learning) and old (3D morphable model) tricks to make it work! This allows us to synthesize new views and manipulate the FOV.

Project: portrait-nerf.github.io
Work led by the amazing Chen Gao (@gaochen315) and in collaboration with friends from Google (Yichang Shih, Wei-Sheng Lai, and Chia-Kai Liang).

Paper: arxiv.org/abs/2012.05903
So, how does it work?

Training a NeRF from a single image from scratch won't work because it cannot recover the correct shape. The rendering results look fine at the original viewpoint but produce large errors at novel views.
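The meta-learning ingredient can be illustrated with a toy Reptile-style loop (my own sketch under simplified assumptions: a one-parameter model standing in for NeRF weights, and synthetic linear "tasks" standing in for portrait subjects). The shared initialization is repeatedly nudged toward each task's adapted parameters, so a few gradient steps on a single new example already land near a plausible solution instead of starting from scratch.

```python
import numpy as np

rng = np.random.default_rng(0)

def inner_fit(theta, xs, ys, steps=10, lr=0.1):
    """A few gradient-descent steps on squared error for y = theta * x,
    standing in for per-subject finetuning of NeRF weights."""
    for _ in range(steps):
        grad = np.mean(2.0 * (theta * xs - ys) * xs)
        theta -= lr * grad
    return theta

# Reptile-style meta-training: move the shared initialization toward
# each task's adapted weights, rather than solving one task from scratch.
theta_meta = 0.0
for _ in range(200):
    slope = rng.uniform(1.0, 3.0)          # one "task": data from y = slope * x
    xs = rng.uniform(-1.0, 1.0, 20)
    theta_task = inner_fit(theta_meta, xs, slope * xs)
    theta_meta += 0.2 * (theta_task - theta_meta)

# theta_meta now sits near the middle of the task distribution,
# a good starting point for adapting to any single new task.
```

The same intuition, at much larger scale, is why a meta-learned initialization can make single-image adaptation recover sensible shape where scratch training cannot.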
10 Dec
Congratulations Jinwoo Choi for passing his Ph.D. thesis defense!

Special thanks to the thesis committee members (Lynn Abbott, @berty38, Harpreet Dhillon, and Gaurav Sharma) for their valuable feedback and advice.
Jinwoo started his Ph.D. by building an interactive system for home-based stroke rehabilitation, published at ASSETS 2017 and PETRA 2018.

The preliminary efforts laid the foundation for a recent $1.1 million NSF Smart and Connected Health award!
He then looked into scene biases in action recognition datasets and presented debiasing methods that lead to improved generalization in downstream tasks [Choi NeurIPS 19]. chengao.vision/SDN/
6 Jul
Sharing one idea I found useful for paper writing:

Do NOT ask people to solve correspondence problems.

Some Dos and Don'ts examples below:

*Figures*: Don't ask people to match (a), (b), (c) ... with the descriptions in the figure caption.
*Figure caption*

Use "self-contained" caption. It's annoying to dig into the texts and match them to the figures. Ain't nobody got time for that! ⌚️

Also, add a figure "caption title" (in bold fonts). It allows readers to navigate through figures quickly.
*Notations*

Give specific, meaningful names to your math notations. That way, readers won't need to go back and forth to figure out what each term means.