Tomorrow @latentjasper, @balajiln, and I present a #NeurIPS2020 tutorial on "Practical Uncertainty Estimation and Out-of-Distribution Robustness in Deep Learning". Whether you're new to the area or an expert, there's critically useful info! 8:00-10:30am PT nips.cc/virtual/2020/t…
The talk is split into three sections: 1. Why Uncertainty & Robustness; 2. Foundations; and 3. Recent Advances.
Tutorials do _not_ require registration to attend!
See everyone at the conference!

• • •

More from @dustinvtran

7 Dec
Snippet 1 from the #NeurIPS2020 tutorial, presented by @balajiln: What do we mean by uncertainty and out-of-distribution robustness? nips.cc/virtual/2020/t…
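To make those terms concrete, here is a minimal sketch (my own illustration, not code from the tutorial; the function names are hypothetical) of one standard way to quantify predictive uncertainty with an ensemble: total uncertainty is the entropy of the averaged prediction, and subtracting the average per-member entropy isolates the epistemic part, which should grow on out-of-distribution inputs where members disagree.

```python
import numpy as np

def entropy(p, axis=-1, eps=1e-12):
    """Shannon entropy of categorical distributions along `axis`."""
    return -np.sum(p * np.log(p + eps), axis=axis)

def uncertainty_decomposition(member_probs):
    """member_probs: [ensemble_size, batch, num_classes] softmax outputs.

    Returns (total, aleatoric, epistemic) per example, where
    total = H(E[p]), aleatoric = E[H(p)], epistemic = total - aleatoric.
    """
    mean_probs = member_probs.mean(axis=0)          # [batch, num_classes]
    total = entropy(mean_probs)                     # entropy of ensemble average
    aleatoric = entropy(member_probs).mean(axis=0)  # average member entropy
    epistemic = total - aleatoric                   # mutual information (>= 0)
    return total, aleatoric, epistemic

# Toy usage: 4 ensemble members, 2 examples, 3 classes.
probs = np.random.dirichlet(np.ones(3), size=(4, 2))
print(uncertainty_decomposition(probs))
```

On OOD inputs, members tend to disagree, so the epistemic term rises even when each member is individually confident.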
Snippet 2: Even on simple benchmarks, neural networks not only generalize poorly out-of-distribution but also degrade in the quality of their uncertainty estimates.
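A sketch of how that degradation is typically measured (my own code, hypothetical names, not from the tutorial slides): expected calibration error (ECE) bins predictions by confidence and compares each bin's average confidence to its accuracy. Under dataset shift, confidence tends to stay high while accuracy drops, so ECE rises.

```python
import numpy as np

def expected_calibration_error(probs, labels, num_bins=15):
    """probs: [n, num_classes] softmax outputs; labels: [n] integer labels."""
    confidences = probs.max(axis=1)
    predictions = probs.argmax(axis=1)
    correct = (predictions == labels).astype(float)
    bins = np.linspace(0.0, 1.0, num_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            # |average confidence - accuracy| in this bin,
            # weighted by the fraction of examples that fall in it.
            gap = abs(confidences[mask].mean() - correct[mask].mean())
            ece += mask.mean() * gap
    return ece
```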
Snippet 3: There are a number of applications where uncertainty and robustness methods are already in use. The problem is at the heart of many areas of AI & ML.
7 Sep
How I spent this weekend: upgrading my battlestation.
Cable management is a huge quality of life improvement.
A KVM switch also helps with managing personal and work computers.
13 Dec 19
Check out BatchEnsemble: Efficient Ensembling with Rank-1 Perturbations at the #NeurIPS2019 Bayesian DL workshop. Better accuracy and uncertainty than dropout, and competitive with deep ensembles across a wide range of tasks. 1/-
It’s a drop-in replacement for individual layers, like dropout, batchnorm, and variational layers, and is available alongside baselines and Bayesian layers at github.com/google/edward2. 2/-
Unlike deep ensembles, it’s trained end to end under a single loss function (the negative log-likelihood), and computation can be parallelized across ensemble members on GPUs/TPUs. BatchEnsemble is like a new parameterization for neural nets (minimal sketch below). 3/-
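For intuition, here is a minimal NumPy sketch of that parameterization (my own simplification with hypothetical names; the maintained implementation lives in edward2): each member i modulates a shared weight matrix W with a rank-1 factor outer(s_i, r_i). Because the factors act element-wise on the layer's input and output, all members run through a single shared matmul.

```python
import numpy as np

class BatchEnsembleDense:
    def __init__(self, in_dim, out_dim, ensemble_size, rng):
        self.W = rng.standard_normal((in_dim, out_dim)) * 0.05  # shared "slow" weight
        self.s = rng.standard_normal((ensemble_size, in_dim))   # per-member input factor
        self.r = rng.standard_normal((ensemble_size, out_dim))  # per-member output factor
        self.b = np.zeros((ensemble_size, out_dim))             # per-member bias

    def __call__(self, x):
        """x: [ensemble_size, batch, in_dim] -> [ensemble_size, batch, out_dim].

        Equivalent to applying W * outer(s_i, r_i) separately for each
        member i, but computed with one matmul shared across members.
        """
        return ((x * self.s[:, None, :]) @ self.W) * self.r[:, None, :] + self.b[:, None, :]

rng = np.random.default_rng(0)
layer = BatchEnsembleDense(in_dim=8, out_dim=4, ensemble_size=3, rng=rng)
out = layer(rng.standard_normal((3, 16, 8)))  # 3 members, batch of 16
print(out.shape)  # (3, 16, 4)
```

The per-member fast weights add only O(in_dim + out_dim) parameters per member, which is why the method is far cheaper than storing a full copy of the network per ensemble member.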
