Assistant Professor @WisconsinCS. Formerly postdoc @StanfordAILab, Ph.D. @Cornell. Making AI safe and reliable for the open world.
Feb 3, 2022 • 10 tweets • 3 min read
How can we make neural networks learn both the knowns and the unknowns? Check out our #ICLR2022 paper “VOS: Learning What You Don’t Know by Virtual Outlier Synthesis”, a general learning framework that works for both object detection and classification tasks. 1/n
arxiv.org/abs/2202.01197
(2/) Joint work with @xuefeng_du and @MuCai7. Deep networks often struggle to reliably handle the unknowns. In self-driving, an object detection model trained to recognize known objects (e.g., cars, stop signs) can produce a high-confidence prediction for an unseen object such as a moose.
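To make the core idea concrete, here is a minimal sketch of how virtual outliers can be synthesized in feature space, based on my reading of the paper: fit a Gaussian to a class's penultimate-layer features, sample candidates, and keep only the low-likelihood ones. Function names, shapes, and the per-class treatment are illustrative simplifications (the paper uses a shared covariance across classes and an epsilon-threshold selection), not the authors' code.

```python
import torch
from torch.distributions import MultivariateNormal

def sample_virtual_outliers(feats: torch.Tensor, n_candidates: int = 10000, keep: int = 100):
    """Fit a Gaussian to one class's penultimate-layer features, sample candidates,
    and keep the lowest-likelihood samples as virtual outliers (illustrative sketch)."""
    mean = feats.mean(dim=0)
    cov = torch.cov(feats.T) + 1e-4 * torch.eye(feats.shape[1])  # regularize covariance
    dist = MultivariateNormal(mean, covariance_matrix=cov)
    candidates = dist.sample((n_candidates,))
    log_prob = dist.log_prob(candidates)
    # Lowest-density candidates lie near the boundary of the class distribution.
    idx = torch.argsort(log_prob)[:keep]
    return candidates[idx]

# Usage sketch: features of one known class (e.g., 64-d penultimate embeddings).
class_feats = torch.randn(500, 64)
virtual_outliers = sample_virtual_outliers(class_feats)  # shape (100, 64)
```

In VOS, such virtual outliers then feed an uncertainty regularization term during training, encouraging the model to assign them higher uncertainty than in-distribution features.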
Oct 9, 2020 • 9 tweets • 2 min read
Suffering from overconfident softmax scores? Time to use energy scores!
Excited to release our NeurIPS paper on "Energy-based Out-of-distribution Detection", a theoretically motivated framework for OOD detection. 1/n
Paper: arxiv.org/abs/2010.03759 (w/ code included)
(2/) Joint work w/ Weitang Liu, Xiaoyun Wang, and John Owens. We show that energy is desirable for OOD detection since it is provably aligned with the probability density of the input—samples with higher energies can be interpreted as data with a lower likelihood of occurrence.
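As a quick illustration of how such an energy score can be computed from a classifier's logits, here is a minimal sketch, assuming a standard discriminative model; the free energy is a negative temperature-scaled logsumexp over the logits, and the threshold below is a placeholder to be tuned on held-out data, not a value from the paper.

```python
import torch

def energy_score(logits: torch.Tensor, temperature: float = 1.0) -> torch.Tensor:
    """E(x) = -T * logsumexp(f(x) / T) over class logits.
    Lower energy ~ higher density (in-distribution); higher energy ~ likely OOD."""
    return -temperature * torch.logsumexp(logits / temperature, dim=-1)

# Usage sketch: flag inputs whose energy exceeds a threshold chosen on validation data.
logits = torch.randn(8, 10)       # batch of 8 samples, 10-class logits
scores = energy_score(logits)     # shape (8,)
threshold = 0.0                   # placeholder; tune on held-out data
is_ood = scores > threshold
```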