Writing Related Work

I enjoy reading/writing the related work section of a paper. It helps organize prior research and put the contributions of the work in proper context.

But HOW? Check the thread below👇
*Divide and conquer*

No one likes to read 1–2 pages of solid text. Identify a couple of important “topics” relevant to your research. Add paragraph titles (\paragraph{}) so that the section is easy to navigate.
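For example, the skeleton of a topic-organized section might look like this (a minimal LaTeX sketch; the topic names are placeholders borrowed from the super-resolution examples later in the thread):

\section{Related Work}

\paragraph{External example-based super-resolution.}
% Trajectory of this topic, then its relationship to this paper.
...

\paragraph{Internal image statistics.}
% Trajectory and relationship for the second topic.
...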
*Topic*

For each topic, write about
1) the TRAJECTORY of the research progress as a story and
2) the RELATIONSHIP of prior art and this paper.
*Trajectory*

Describe what the problem is, why it is challenging, and what people have done in this field to tackle it. Connect existing work into a clear research trajectory.
*Avoid laundry list*

Organizing and writing a topic as a clear trajectory is not easy. So instead of learning what to write, it’s often helpful to learn what NOT to write.

No “Author A did blah blah. Author B did blah blah. Author C …”. Focus on the work, not the people.
*Don’t use citations as nouns*

Your sentences should still be complete and correct even if you remove all the parenthetical citations.
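With natbib, for example, \citet{} produces a textual citation and \citep{} a parenthetical one (a minimal sketch; the package choice and the BibTeX key are my assumptions, not from the thread):

% Bad: the citation acts as a noun; remove it and the sentence breaks.
In \citep{huang2015selfexemplar}, patch recurrence across scales is exploited.

% Good: the sentence stands on its own without the parenthetical citation.
Self-exemplar methods exploit patch recurrence across scales \citep{huang2015selfexemplar}.

% Also fine: \citet{} makes the citation a grammatical part of the sentence.
\citet{huang2015selfexemplar} exploit patch recurrence across scales.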
*Don’t just describe, RELATE it*

In each topic, articulate the relationship between prior work and yours. Ex:

Our work is similar in that we also …
Our work differs in …
Unlike/in contrast to …, we …
*Identifying the key differences*

Try finding ONE key contrastive concept to separate your work from others, and highlight it with \emph{} (see the sketch after the list). Ex:

- Multiple -> Single
- Content-agnostic -> Content-aware
- Static -> Dynamic
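In the text, that single contrast can then be highlighted consistently (a minimal LaTeX sketch using the last pair above):

% Emphasize the one contrastive concept, not entire sentences.
Unlike prior methods that assume a \emph{static} scene representation,
our approach builds a \emph{dynamic} one.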
*Be respectful*

Do not trash prior work. The authors may well end up being your reviewers…
*Be generous*

Make sure that you cover all the important references. Giving people credit does not make your work less worthy.
That’s all! Hope this is helpful.

What are your favorite tips when writing the related work?
I also put together a few Related Work examples I like. Check them out below!

Example 1

Trajectory:
• Learning LR-HR -> challenge: large patch space -> learning mixture of models -> learning 1D profile -> high-level feature

Relationship:
• Contrastive concept: External vs. Internal (no learning)

Source: cv-foundation.org/openaccess/con…
Example 2

Trajectory:
• Applications of vision-based methods for assessment.
• Highlight the closest related work.

Relationship:
• Building upon the methodology... BUT using deep learning.

Source: Deep Paper Gestalt arxiv.org/abs/1812.08775
Example 3

Trajectory:
• Video completion methods and their use for view synthesis.

Relationship:
• Contrastive concept: \emph{screen space} vs. \emph{3D space}.

Source: arxiv.org/abs/2011.12950
:(