Sharon Y. Li · Feb 3 · 10 tweets · 3 min read
How can we make neural networks learn both the knowns and unknowns? Check out our #ICLR2022 paper “VOS: Learning What You Don’t Know by Virtual Outlier Synthesis”, a general learning framework that suits both object detection and classification tasks. 1/n

arxiv.org/abs/2202.01197
(2/) Joint work with @xuefeng_du @MuCai7. Deep networks often struggle to reliably handle unknowns. In self-driving, an object detection model trained to recognize known objects (e.g., cars, stop signs) can produce a high-confidence prediction for an unseen object such as a moose.
(3/) The problem arises due to the lack of knowledge of unknowns during training time. Neural networks are typically optimized only on the in-distribution data. The resulting decision boundary, despite being useful on ID tasks, can be ill-suited for OOD detection. See Figure 1(b).
(4/) Ideally, a model should learn a compact decision boundary between ID and OOD data, like Fig 1(c). However, this is non-trivial due to the lack of supervision of unknowns. This motivates our paper: Can we synthesize virtual outliers for effective model regularization?
(5/) In this paper, we propose a novel unknown-aware learning framework dubbed VOS (Virtual Outlier Synthesis), which jointly optimizes the dual objectives of ID task performance and OOD detection performance.
(6/) VOS consists of three components tackling challenges of outlier synthesis and model regularization with synthesized outliers. Key to our method, we show that sampling in the feature space is more tractable than synthesizing images in the high-dimensional pixel space.
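
To make the feature-space sampling idea concrete, here is a rough sketch in PyTorch. This is not the official VOS code; the shared-covariance Gaussian fit, the sample counts, and the low-likelihood selection rule below are illustrative assumptions.

```python
import torch

def synthesize_virtual_outliers(features, labels, num_classes,
                                samples_per_class=1000, keep_per_class=1):
    """features: (N, D) penultimate-layer embeddings of ID training data."""
    dim = features.shape[1]
    means, centered = [], []
    for c in range(num_classes):
        feats_c = features[labels == c]
        mu_c = feats_c.mean(dim=0)
        means.append(mu_c)
        centered.append(feats_c - mu_c)
    centered = torch.cat(centered)
    # Class-shared covariance estimate, with a small ridge term for stability.
    cov = centered.T @ centered / centered.shape[0] + 1e-4 * torch.eye(dim)

    outliers = []
    for c in range(num_classes):
        gauss = torch.distributions.MultivariateNormal(means[c], covariance_matrix=cov)
        candidates = gauss.sample((samples_per_class,))   # (S, D) feature-space samples
        log_prob = gauss.log_prob(candidates)             # (S,)
        # Keep only the lowest-likelihood candidates: points near the class
        # boundary that serve as "virtual outliers" for regularization.
        idx = torch.topk(-log_prob, k=keep_per_class).indices
        outliers.append(candidates[idx])
    return torch.cat(outliers)                            # (num_classes * keep_per_class, D)
```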
(7/) VOS offers several compelling advantages: (1) VOS is a general learning framework that is suitable for both object detection and classification tasks. (2) VOS enables adaptive outlier synthesis, which can be flexibly used without manual data collection or cleaning.
(8/) We evaluate our method on common OOD detection benchmarks, along with a more challenging yet underexplored task in the context of object detection. As part of our study, we also curated OOD test datasets that allow future research to evaluate object-level OOD detection.
(9/) More broadly, our work builds on insights from the energy-based OOD learning framework and improves its regularization loss. We were also inspired by early work on GAN-based outlier synthesis by @kimin_le2.
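
As a rough illustration of how such a regularization term could plug into training, here is a simplified sketch: it treats the negative energy of ID features vs. synthesized outliers as the logit in a binary logistic loss. The exact formulation in the paper differs; the weight `beta` and the direct use of negative energy below are assumptions.

```python
import torch
import torch.nn.functional as F

def uncertainty_loss(id_logits, outlier_logits, T: float = 1.0):
    # Negative free energy: should be high for ID features, low for virtual outliers.
    neg_energy_id = T * torch.logsumexp(id_logits / T, dim=-1)
    neg_energy_out = T * torch.logsumexp(outlier_logits / T, dim=-1)
    scores = torch.cat([neg_energy_id, neg_energy_out])
    targets = torch.cat([torch.ones_like(neg_energy_id),
                         torch.zeros_like(neg_energy_out)])
    # Binary logistic loss that pushes the two energy distributions apart.
    return F.binary_cross_entropy_with_logits(scores, targets)

def total_loss(task_loss, id_logits, outlier_logits, beta: float = 0.1):
    # Dual objective: standard ID task loss plus the uncertainty regularizer.
    return task_loss + beta * uncertainty_loss(id_logits, outlier_logits)
```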
(10/) Happy to get feedback if you have more detailed comments. The code and data are publicly available at: github.com/deeplearning-w…

More from @SharonYixuanLi

Oct 9, 2020
Suffering from overconfident softmax scores? Time to use energy scores!

Excited to release our NeurIPS paper on "Energy-based Out-of-distribution Detection", a theoretically motivated framework for OOD detection. 1/n

Paper: arxiv.org/abs/2010.03759 (w/ code included)
(2/) Joint work w/ Weitang Liu, Xiaoyun Wang, and John Owens. We show that energy is desirable for OOD detection since it is provably aligned with the probability density of the input—samples with higher energies can be interpreted as data with a lower likelihood of occurrence.
(3/) In contrast, we show mathematically that the softmax confidence score is a biased scoring function that is not aligned with the density of the inputs and hence is not suitable for OOD detection.
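
For illustration, a minimal sketch of the energy score at inference time, assuming `logits` are the pre-softmax outputs f(x) of a trained classifier and T is a temperature (the paper's default is T = 1). The thresholding helper is a hypothetical convenience, not the paper's API.

```python
import torch

def energy_score(logits, T: float = 1.0):
    # E(x) = -T * logsumexp(f(x) / T); lower energy corresponds to higher ID density.
    return -T * torch.logsumexp(logits / T, dim=-1)

def is_ood(logits, threshold):
    # Flag inputs whose energy exceeds a threshold chosen on validation data
    # (e.g., the value giving 95% true-positive rate on ID samples).
    return energy_score(logits) > threshold
```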