Sharing tips on preparing your presentation slides

I just attended many thesis presentations and qual exams at the end of the semester. I compiled some common pitfalls here and hope they will be helpful to some.

Check out the thread 🧵below!

I am surprised to see so many talks starting with the OUTLINE.

No one, literally no one, will be excited by: "I will first introduce the problem, then discuss related work, next present our method, show some results, and conclude the talk".
*Be concise*

Do not treat your slides as a script.

Rules of thumb for my students preparing a talk:
• Never write full sentences (unless quoting)
• Always write one-liners
• No more than three lines of text per slide

The tables in your paper/thesis are very informative, with all the citations and compared methods. This is great. But it's a disaster to present them as-is in your talk.

No one knows what [17], [39] mean. Highlight and interpret the key results for your audience.

Explain how to read your plots. What does x-axis/y-axis mean? What do different lines mean? What can we learn from this plot?

If you plan to skip the discussion of some figures, just remove them.
*Informative slide titles*

Don't use the most salient part of a slide to show generic titles like "Results", "Visual comparison", or "Ablation study".

The title should describe the TAKEAWAY message from that slide.
*Final slide*

Avoid stopping at a "Thank you" slide at the end. Show the main results/conclusions/contributions of your work on your final slide. This reminds people what you have done and helps them ask good questions.

There are many wonderful resources online. Check them out! A few pointers:

Patrick Henry Winston:…

Matt Might:…

Kristen Grauman:

Use animation to break down a complicated diagram/figure/concept and describe it step by step.
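If you build your slides in LaTeX beamer, this step-by-step reveal can be sketched with overlay specifications on TikZ nodes and paths. This is a minimal illustrative frame (the pipeline boxes are hypothetical placeholders, not from the thread):

```latex
% Sketch: revealing a diagram step by step with beamer overlays.
\documentclass{beamer}
\usepackage{tikz}
\begin{document}
\begin{frame}{Each click reveals one stage of the pipeline}
  \centering
  \begin{tikzpicture}
    % Fix the bounding box so the picture does not shift between clicks
    % (this also helps with the alignment tip below).
    \useasboundingbox (-0.8,-0.5) rectangle (6.8,0.5);
    % Click 1: only the input is shown
    \node[draw] (in) at (0,0) {Input};
    % Click 2: the model box and its arrow appear
    \node<2->[draw] (model) at (3,0) {Model};
    \draw<2->[->] (in) -- (model);
    % Click 3: the output appears last
    \node<3->[draw] (out) at (6,0) {Output};
    \draw<3->[->] (model) -- (out);
  \end{tikzpicture}
\end{frame}
\end{document}
```

The `<2->` after `\node`/`\draw` means "visible from the second click onward", so the audience can follow one piece at a time.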

When advancing the slides, make sure that all the components are perfectly aligned to reduce mental load.

Insert the video directly into the slides (no YouTube embedding please) and use animation to control when it appears, plays, and stops. Otherwise, you may be busy trying to figure out where your cursor is to play the video during the talk.
*Level of detail*

Students tend to squeeze as much paper/thesis content as possible into the talk. This is understandable, as all of it is hard work.

But remember that your audience will be much happier to see a concise and clear talk.

If you plan to point to a number/text/figure in your slides, add an arrow/box/circle pointing to it (with animation). Don't use your mouse pointer.

Don't use a laser pointer for in-person talks either. Nothing is more annoying than tracking a shaking red dot.
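In beamer, this kind of built-in pointer can be as simple as an overlay-timed highlight on the number you want the audience to look at. A hypothetical sketch (the methods and scores are made-up placeholders):

```latex
% Sketch: highlighting a key result on a click instead of using the mouse.
\documentclass{beamer}
\begin{document}
\begin{frame}{Ours improves PSNR by 3.3 dB}  % takeaway-style title
  \centering
  \begin{tabular}{lc}
    Method   & PSNR (dB) \\
    \hline
    Baseline & 28.1 \\
    Ours     & \alert<2>{31.4} \\  % turns red on the second click
  \end{tabular}
\end{frame}
\end{document}
```

`\alert<2>{...}` colors the number only on the second overlay, so the highlight appears exactly when you talk about it.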
When you want to emphasize/highlight a take-home message or important concept, make sure to "de-emphasize" all the rest of the content as well.

How? Add a rectangle with 10% transparency blocking the rest of the content.

That's all (for now)! Happy presenting!

What are your favorite presentation tips (or the practices you hate the most)?

Thread by Jia-Bin Huang
