Jon Barron
AI researcher at Google DeepMind. Synthesized views are my own.
Feb 18
I just pushed a new paper to arXiv. I realized that a lot of my previous work on robust losses and NeRF-y things was dancing around something simpler: a slight tweak to the classic Box-Cox power transform that makes it much more useful and stable. It's this f(x, λ) here:

Here's an explainer video of my first paper in this line, before I understood things fully: youtube.com/watch?v=BmNKbn…. This new paper (arxiv.org/abs/2502.10647) broadens the same idea to a wider range of things: curves, losses, kernels, PDFs, bumps, and activation functions.
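For context, the classic Box-Cox power transform that the tweak builds on is (x^λ − 1)/λ, with the λ → 0 limit log(x). The tweaked f(x, λ) itself is in the paper; this sketch shows only the classic transform and one of its quirks (it requires x > 0) that motivates modifying it:

```python
import math

def box_cox(x, lam, eps=1e-12):
    # Classic Box-Cox power transform: (x**lam - 1) / lam.
    # As lam -> 0 this converges to log(x), so we switch to the
    # limit explicitly near zero for numerical stability.
    # Note: only defined for x > 0, one limitation the paper's
    # tweaked f(x, lambda) is designed to remove.
    if x <= 0:
        raise ValueError("classic Box-Cox requires x > 0")
    if abs(lam) < eps:
        return math.log(x)
    return (x**lam - 1.0) / lam
```

At λ = 1 this is a shifted identity, at λ = 0 a log, and other λ values interpolate or extrapolate between power curves, which is why one scalar knob can sweep across families of curves, losses, and kernels.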
Mar 3, 2022
Very glad I can finally talk about our newly-minted #CVPR2022 paper. We extended mip-NeRF to handle unbounded "360" scenes, and it got us ~photorealistic renderings and beautiful depth maps. Explainer video: and paper: arxiv.org/abs/2111.12077

We "contract" Euclidean space into a bounded domain, which gets hard because we need to warp the mip-NeRF Gaussians that model 3D volumes of space. The trick for making this work is linearizing the contraction (thanks JAX!) and using the same math as an extended Kalman filter.
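A minimal sketch of that idea: the contraction maps points inside the unit ball to themselves and sends a point at radius r > 1 to radius 2 − 1/r, so all of space lands in a ball of radius 2. To warp a Gaussian (mean μ, covariance Σ), linearize the contraction at μ and propagate Σ through the Jacobian, exactly as an extended Kalman filter propagates covariance through a nonlinearity. The paper gets the Jacobian from JAX autodiff; this sketch substitutes a finite-difference Jacobian so it stays dependency-light:

```python
import numpy as np

def contract(x):
    # Scene contraction from mip-NeRF 360: identity inside the unit
    # ball, otherwise shrink radius r to 2 - 1/r along the same ray.
    r = np.linalg.norm(x)
    if r <= 1.0:
        return x
    return (2.0 - 1.0 / r) * (x / r)

def numerical_jacobian(f, x, eps=1e-6):
    # Central-difference stand-in for the JAX autodiff used in the paper.
    n = x.size
    J = np.zeros((n, n))
    for i in range(n):
        dx = np.zeros(n)
        dx[i] = eps
        J[:, i] = (f(x + dx) - f(x - dx)) / (2.0 * eps)
    return J

def contract_gaussian(mu, cov):
    # EKF-style propagation: push the mean through the nonlinearity,
    # and transform the covariance with the local Jacobian J Sigma J^T.
    J = numerical_jacobian(contract, mu)
    return contract(mu), J @ cov @ J.T
```

The EKF analogy is just that a Gaussian pushed through a differentiable map is approximated by a Gaussian whose covariance is sandwiched by the map's Jacobian; here the "map" is the scene contraction rather than a dynamics model.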