Sharing ideas on how to disseminate your research.

"I am THRILLED to share that our paper is accepted to ..."

Congrats! So what's next? No one is going to browse through the list of thousands of accepted papers. Ain't nobody got time for that.

Check out 🧵below for examples.
*Website*

Use memorable domain names for your project website so that people can easily find/share the link. No university account? That's okay. Register a new name on GitHub Pages.

Examples:
oops.cs.columbia.edu
crowdsampling.io
robust-cvd.github.io
*Acronym*

Make it easy for people to remember and refer to your work. As David Patterson said, the vowel is important: an acronym you can pronounce is one people can remember.

For example, NeRF sounds waaaaaay cooler than NRF.
*Result video*

Make a simple video showing the killer results from your work! Based on my back-of-the-envelope calculation, I would have to present this ECCV paper in the Zoom poster session for 18 years straight to reach the same level of visibility.

*Paper video*

Having a short video introducing the essence of the paper is arguably THE BEST.

Examples I like:
- @jon_barron
- @AbeDavis
- @JPKopf
*Downloadable results*

Do not just put your result videos on YouTube and embed them on your website. Make it super easy for people to download (and share) your results. Help people share your work.

Your own hosting has no image/video size limits, so share the highest quality possible.
*Additional results*

Very often you need to work hard constructing baseline results across multiple datasets. Make them available so that people can easily follow up. For example, many citations of this work are for the released baselines rather than its specific technical contributions.
*Supplementary website*

Organize all the results across multiple datasets, methods, and ablations on a webpage. This allows EVERYONE (including myself) to interactively explore the results.

Example:
alex04072000.github.io/ObstructionRem…
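Scripting the page generation makes this easy to maintain: adding a method or dataset is just a new folder. Below is a minimal Python sketch; the results/<method>/<dataset>/<clip>.mp4 layout and all file names are assumptions for illustration, not a convention from this thread.

# build_results_page.py -- generate a simple comparison page.
# Assumed (hypothetical) layout: results/<method>/<dataset>/<clip>.mp4
import html
from pathlib import Path

ROOT = Path("results")
methods = sorted(p.name for p in ROOT.iterdir() if p.is_dir())

# Every (dataset, clip) pair that appears under any method.
clips = sorted({(v.parent.name, v.name)
                for m in methods
                for v in (ROOT / m).glob("*/*.mp4")})

rows = []
for dataset, clip in clips:
    cells = "".join(
        f'<td><video src="results/{m}/{dataset}/{clip}" width="240" controls></video></td>'
        for m in methods)
    rows.append(f"<tr><td>{html.escape(dataset)}/{html.escape(clip)}</td>{cells}</tr>")

header = "".join(f"<th>{html.escape(m)}</th>" for m in methods)
Path("index.html").write_text(
    f"<table>\n<tr><th>sequence</th>{header}</tr>\n" + "\n".join(rows) + "\n</table>\n")
print(f"Wrote index.html: {len(rows)} sequences x {len(methods)} methods")

Re-run it whenever results change; one column per method makes side-by-side comparison (and spotting ablation differences) immediate.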
*GitHub*

Don't simply dump unorganized research code to GitHub. Write clear instructions and simplify the steps required to get the code running.

Examples I like:
- github.com/junyanz/pytorc… (@junyanz89)
- github.com/NVlabs/stylega…
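One concrete way to simplify the steps: ship a single demo script that runs end-to-end with one command. A minimal sketch, with all module and function names hypothetical:

# demo.py -- one-command demo entry point (names below are hypothetical).
# Usage: python demo.py --input examples/input.jpg --output output.jpg
import argparse

from your_project.model import load_pretrained  # hypothetical helper

def main():
    parser = argparse.ArgumentParser(description="Run the method on a single image.")
    parser.add_argument("--input", required=True, help="path to the input image")
    parser.add_argument("--output", default="output.jpg", help="where to save the result")
    args = parser.parse_args()

    model = load_pretrained()  # fetches the released weights on first run
    model.process(args.input, args.output)
    print(f"Saved result to {args.output}")

if __name__ == "__main__":
    main()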
*Colab*

Not everyone has the knowledge/resources to set up the environment required for your code on GitHub. Preparing a Colab notebook demo (or one on another platform) lets everyone play around with your method.
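A demo notebook can be just a few cells. A sketch of what the cells might contain (the repo URL is hypothetical; lines starting with ! or % are Colab shell/notebook magics):

# Cell 1: set up the environment.
!git clone https://github.com/your-lab/your-project.git
%cd your-project
!pip install -r requirements.txt

# Cell 2: run the one-command demo from the GitHub section on a bundled example.
!python demo.py --input examples/input.jpg --output output.jpg

# Cell 3: show the result inline so visitors see it immediately.
from IPython.display import Image, display
display(Image("output.jpg"))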
*arXiv*

Host all your papers on arXiv. It's very frustrating to hit papers behind a paywall. Posting your paper at a specific time further increases the visibility/readership/impact of your work: arXiv announcements list articles in submission order, so submitting right after the daily cutoff puts your paper near the top of the listing.

*Teaser image/video*

On your publication page, show teaser images/videos so that people can quickly browse through all your work. Work hard to optimize the quality of your teaser! Trust me, it's definitely worth your time.

filebox.ece.vt.edu/~jbhuang/#pubs
*Engagement*

When sharing on social media or other sites (e.g., Twitter, YouTube, HackerNews, Reddit...), engage with people who comment on your work, even though you may sometimes encounter comments with bad intentions. Over time, these people will become your best allies.
*Hyperlinks*

Make sure every page has hyperlinks to every other page. For example, add links to authors' pages, related projects, GitHub/Colab, datasets, YouTube videos, and additional results. Make it easy to navigate the contents via multiple paths.
*BibTeX*

Everyone knows that the BibTeX from Google Scholar is often erroneous. Do your readers a favor and help them cite your paper correctly.
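For example, post a hand-checked entry right on your project page so people can copy it directly. A sketch with every field hypothetical:

@inproceedings{lastname2021acronym,
  author    = {Lastname, Firstname and Coauthor, Another},
  title     = {Your Paper Title: What Is Special About It},
  booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year      = {2021}
}

Check the venue name and page numbers against the official proceedings, and wrap acronyms in braces (e.g., {NeRF}) so BibTeX styles don't lowercase them.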
*Paper title*

A title should capture what is SPECIAL about the paper. Check out the talk by Jitendra Malik on choosing a paper title.

BTW, the entire workshop is awesome. Check it out!
cc.gatech.edu/~parikh/citize…
*Music*

Whaaaaat?! How is music related to my research? YES, it helps make your video more engaging and fun to watch. If possible, match the transitions with music beats.

Examples I like:
- @holynski_
*Website template*

Don't know how to write responsive HTML? A good template is your friend!

Examples I like:
nerfies.github.io
richzhang.github.io/colorization/
alex04072000.github.io/ObstructionRem…
*Links to concurrent work*

Provide readers with a complete landscape of the concurrent work.

"Credits are not like money. Giving credit to others does not diminish the credit you get from your paper." - Simon Peyton Jones

Examples:
phog.github.io/snerg/
alexyu.net/plenoctrees/
