Have you ever wondered why papers from top universities/research labs often appear in the top few positions in the daily email and web announcements from arXiv?
Why is that the case? Why should I care?
Wait a minute! Does the article position even matter?
-> Articles in position 1 received median citation counts 83%, 50%, and 100% higher than articles lower down the listing, across three communities.
So you get a significantly higher visibility boost, wider readership, and more long-term citations and impact ...
simply by getting your paper into the top position of the listing!
Crazy huh?
How can I get my paper on the top?
Understand that arXiv's announcement order behaves like a "stack" data structure (first in, last out, FILO)! Submitting your paper RIGHT BEFORE THE DEADLINE will make it appear at the top! This is probably the easiest way to improve your paper's visibility.
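The stack behavior above can be sketched in a few lines. This is a toy model, not arXiv's actual code: it just assumes (as the thread claims) that papers are pushed in submission order and announced in pop (last-in, first-out) order. The paper names and times are made up for illustration.

```python
# Toy model of the claimed arXiv announcement ordering as a stack (FILO/LIFO).
# Papers arrive in submission order; the announcement pops them off,
# so the last submission before the deadline lands at position 1.
submissions = ["paper_A (09:00)", "paper_B (13:30)", "paper_C (13:59)"]

stack = []
for paper in submissions:
    stack.append(paper)            # push in submission order

announcement = []
while stack:
    announcement.append(stack.pop())  # pop: last in, first out

print(announcement[0])  # paper_C (13:59) — the last-minute submission tops the list
```

Under this model, submitting one minute before the cutoff beats submitting hours earlier.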
Check out the submission deadlines below. Unfortunately, the deadlines are hard to meet from Asian countries; e.g., 2:00 PM EST is 3:00 AM the next day in Taiwan.
This, in some sense, also creates a disparity in visibility across regions. :-(
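The timezone math above can be checked with Python's standard zoneinfo module. The calendar date here is a hypothetical example (any date in EST, i.e., outside US daylight saving time, gives the same offset); only the 2:00 PM ET cutoff comes from the thread.

```python
from datetime import datetime
from zoneinfo import ZoneInfo

# Hypothetical date for illustration; arXiv's submission cutoff is 2:00 PM Eastern.
deadline_et = datetime(2021, 11, 15, 14, 0, tzinfo=ZoneInfo("America/New_York"))

# Convert the same instant to Taiwan time (UTC+8, no DST).
deadline_taipei = deadline_et.astimezone(ZoneInfo("Asia/Taipei"))
print(deadline_taipei.strftime("%Y-%m-%d %H:%M"))  # 2021-11-16 03:00
```

EST is UTC-5 and Taipei is UTC+8, a 13-hour gap, so the cutoff falls at 3:00 AM the following day in Taiwan.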
Here are the submission times for papers at position 1 in the Computer Vision and Pattern Recognition [cs.CV] category last week. In most cases, people wait to submit until right before the deadline so that they get the top position (and therefore more visibility, readership, and citations).
The method achieves AWESOME results but requires precise camera poses as inputs.
Isn't SLAM/SfM a SOLVED problem? You might ask.
Yes, it works pretty well for static, controlled environments. But for casual videos, existing methods often fail to register all frames or produce outlier poses with large errors.
How can we learn a NeRF from a SINGLE portrait image? Check out @gaochen315's recent work, which leverages new (meta-learning) and old (3D morphable model) tricks to make it work! This allows us to synthesize new views and manipulate the FOV.
Training a NeRF from scratch on a single image won't work because it cannot recover the correct shape. The rendering results look fine at the original viewpoint but show large errors at novel views.
Congratulations Jinwoo Choi for passing his Ph.D. thesis defense!
Special thanks to the thesis committee members (Lynn Abbott, @berty38, Harpreet Dhillon, and Gaurav Sharma) for their valuable feedback and advice.
Jinwoo started his PhD by building an interactive system for home-based stroke rehabilitation, published at ASSETS 2017 and PETRA 2018.
These preliminary efforts laid the foundation for a recent $1.1 million NSF Smart and Connected Health award!
He then looked into scene biases in action recognition datasets and presented debiasing methods that led to improved generalization in downstream tasks [Choi NeurIPS 19]. chengao.vision/SDN/