Research Scientist at Google DeepMind. I lead evaluation at Gemini / Bard. AI, Bayesian statistics, deep learning.
May 28, 2021 • 12 tweets • 4 min read
What gripes do you have with LaTeX's defaults, and what do you always add to your papers? Here are mine:

1. Cleveref. Don't write "Section \ref{sec:intro}"; use \Cref{sec:intro}. This makes writing less error-prone, and it makes "Section" part of the hyperlink! texblog.org/2013/05/06/cle…

2. Add colors to your hyperlinks. With hyperref, I like `\hypersetup{citecolor=MidnightBlue}`.
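Putting the two tips together, a minimal preamble sketch (extending the same color to links and URLs is my illustrative addition; the dvipsnames option is what provides MidnightBlue, and cleveref must be loaded after hyperref):

```latex
\usepackage[dvipsnames]{xcolor} % dvipsnames provides MidnightBlue
\usepackage{hyperref}
\hypersetup{
  colorlinks=true,
  citecolor=MidnightBlue, % citations
  linkcolor=MidnightBlue, % \Cref targets: sections, figures, equations
  urlcolor=MidnightBlue,  % raw URLs
}
\usepackage{cleveref}     % load after hyperref

% In the body:
% \Cref{sec:intro} renders as "Section 1", with "Section" inside
% the hyperlink; \ref{sec:intro} gives only the bare "1".
```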
Dec 7, 2020 • 27 tweets • 13 min read
Snippet 1 from the #NeurIPS2020 tutorial, from @balajiln: What do we mean by uncertainty and out-of-distribution robustness? nips.cc/virtual/2020/t…
Snippet 2: Even on simple benchmarks, neural networks not only generalize poorly out of distribution; their uncertainty estimates also degrade as the data shifts.
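To make "uncertainty estimates degrade" concrete: one standard measurement is expected calibration error (ECE), which typically grows as inputs shift away from the training distribution. A minimal NumPy sketch (the function name and 15-bin choice are illustrative, not from the tutorial):

```python
import numpy as np

def expected_calibration_error(probs, labels, n_bins=15):
    """ECE: bin predictions by confidence, then average the
    per-bin |confidence - accuracy| gap, weighted by bin size.

    probs:  (N, K) predicted class probabilities
    labels: (N,)   integer class labels
    """
    confidences = probs.max(axis=1)
    predictions = probs.argmax(axis=1)
    accuracies = (predictions == labels).astype(float)

    bin_edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            gap = abs(confidences[in_bin].mean() - accuracies[in_bin].mean())
            ece += in_bin.mean() * gap
    return ece
```

Evaluated on increasingly shifted test sets, a model whose accuracy drops while its confidence stays high shows exactly this growing ECE.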
Dec 6, 2020 • 4 tweets • 2 min read
Tomorrow @latentjasper, @balajiln, and I present a #NeurIPS2020 tutorial on "Practical Uncertainty Estimation and Out-of-Distribution Robustness in Deep Learning". Whether you're new to the area or an expert, there is critically useful info! 8-10:30a PT nips.cc/virtual/2020/t…
The talk is split into three sections: 1. Why Uncertainty & Robustness; 2. Foundations; and 3. Recent Advances.
Sep 7, 2020 • 4 tweets • 2 min read
How I spent this weekend: upgrading my battlestation.
Cable management is a huge quality of life improvement.
Dec 13, 2019 • 6 tweets • 2 min read
Check out BatchEnsemble: Efficient Ensembling with Rank 1 Perturbations at the #NeurIPS2019 Bayesian DL workshop. Better accuracy and uncertainty estimates than dropout, and competitive with ensembles across a wide range of tasks. 1/-
It’s a drop-in replacement for individual layers (like dropout, batchnorm, and variational layers) and is available, along with baselines and Bayesian layers, at github.com/google/edward2. 2/-
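The core trick, for intuition: every ensemble member i shares a single weight matrix W and owns only two small vectors s_i and r_i, giving member weights W_i = W ∘ (s_i r_iᵀ). A minimal NumPy sketch of a BatchEnsemble dense layer's forward pass (shapes and names are mine, not the edward2 API):

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, n_members, batch = 64, 32, 4, 8

W = rng.normal(size=(d_in, d_out))       # shared weights, stored once
S = rng.normal(size=(n_members, d_in))   # per-member input vectors s_i
R = rng.normal(size=(n_members, d_out))  # per-member output vectors r_i
b = np.zeros((n_members, d_out))         # per-member biases

def batch_ensemble_dense(x, i):
    """Forward pass of member i: y = ((x * s_i) @ W) * r_i + b_i.

    Algebraically equivalent to x @ (W * np.outer(S[i], R[i])) + b[i],
    but never materializes the per-member weight matrix.
    """
    return (x * S[i]) @ W * R[i] + b[i]

x = rng.normal(size=(batch, d_in))
outputs = np.stack([batch_ensemble_dense(x, i) for i in range(n_members)])
mean_prediction = outputs.mean(axis=0)   # ensemble average
```

Because each member only rescales the layer's inputs and outputs, all members can be evaluated in one batched matmul, so the memory and compute overhead over a single network is tiny compared to a full ensemble.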