(1/5) One of the most surprising and little-known results in classical statistics is the relationship between the mean, median, and standard deviation: if the distribution has finite variance, then the distance between the median and the mean is bounded by one standard deviation.
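A one-line proof sketch (the standard argument, not spelled out in the thread): |μ − m| = |E(X − m)| ≤ E|X − m| ≤ E|X − μ| ≤ √(E(X − μ)²) = σ. The middle step uses the fact that the median minimizes c ↦ E|X − c|; the first and last steps are Jensen's inequality.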
(2/5) We assigned this as a HW exercise in a class I taught as a grad student at MIT circa 1991.
(3/5) Coincidentally, it was written up around the same time by C. Mallows in "Another comment on O'Cinneide," The American Statistician, 45(3), 1991.
(4/5) Yes, with the median defined appropriately, this works too: the median here is the "spatial median", the (unique) point m minimizing the sum of distances to the sample points (for distributions, E(|x − m| − |x|)).
(5/5) Results like this are not just curiosities; they're quite useful in practice, since they allow distribution-free estimates of one quantity given the other two. This is important in meta-analyses of studies in the biomedical sciences, etc.
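A quick numeric sanity check (my own numpy sketch; the exponential distribution, with mean 1, median ln 2, and sd 1, is just a convenient skewed example):

import numpy as np

rng = np.random.default_rng(0)
x = rng.exponential(scale=1.0, size=1_000_000)  # mean 1, median ln 2 ≈ 0.693, sd 1

gap = abs(x.mean() - np.median(x))   # ≈ |1 - ln 2| ≈ 0.307
print(gap, x.std(), gap <= x.std())  # the gap sits well inside one sd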
Smoothing splines fit a function to data as the sol'n of a regularized least-squares optimization problem.
But it’s also possible to do it in one shot with an unusually shaped kernel (see figure)
Is it possible to solve other optimization problems this way? Surprisingly, yes.
1/n
This is just one instance of how one can "kernelize" an optimization problem. That is, approximate its solution in just one step by constructing and applying a kernel once to the input.
Under some conditions, you can do it much more generally (a toy sketch below).
2/n
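Here's a minimal numpy sketch of the discrete analogue (Whittaker smoothing; my own illustration, with n, lam & the test signal chosen arbitrarily). The regularized least-squares solution is a fixed linear map, so smoothing amounts to applying a "kernel", the rows of S, once:

import numpy as np

# Solve min_x ||y - x||^2 + lam * ||D x||^2 in closed form: x = (I + lam D'D)^{-1} y
n, lam = 200, 50.0
t = np.linspace(0, 1, n)
rng = np.random.default_rng(1)
y = np.sin(2 * np.pi * t) + 0.3 * rng.standard_normal(n)

D = np.diff(np.eye(n), 2, axis=0)             # second-difference operator, (n-2) x n
S = np.linalg.inv(np.eye(n) + lam * D.T @ D)  # smoother matrix: its rows are the kernels
x_hat = S @ y                                 # one shot: apply the kernel once to the input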
If you specialize the regularizer to the form
φ(x) = ρ(‖Ax‖), where Aᵢⱼ = R(|i−j|) is stationary & isotropic, you get tidy conversions between the regularizer φ(x) and the kernel K(x).
Mean-shift iteratively moves points towards regions of higher density. It does so by placing a kernel at each data point, computing the mean of the data points within that window, and shifting each point toward this mean until convergence. Look familiar?
1/n (Animation @gabrielpeyre)
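Not from the thread, but here's a minimal numpy sketch of the (blurring) variant, where the data points themselves move; the Gaussian kernel & bandwidth are my choices:

import numpy as np

def mean_shift_step(x, bandwidth=0.5):
    d2 = ((x[:, None, :] - x[None, :, :]) ** 2).sum(-1)  # pairwise squared distances
    W = np.exp(-d2 / (2 * bandwidth**2))                 # Gaussian kernel weights
    W /= W.sum(axis=1, keepdims=True)                    # row-normalize: W(x)
    return W @ x                                         # f(x) = W(x) x, the local mean

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2, 0.5, (50, 2)), rng.normal(2, 0.5, (50, 2))])
for _ in range(50):
    x = mean_shift_step(x)  # points collapse onto the two modes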
The first term on the right-hand side of the ODE below has the form of a pseudo-linear denoiser f(x) = W(x) x: a weighted average of the points, where the weights themselves depend on the data. The overall mean-shift process is a lot like a residual flow:
d/dt x(t) = f(x(t)) - x(t)
2/n
The residual on the RHS is an approximation of the "score", i.e. the gradient of the log of the empirical density of x, making mean-shift a gradient flow (quick derivation after this post):
d/dt x(t) ≈ ∇ log p̂(x(t))
So mean-shift a) estimates the empirical density & b) flows points to nearby peaks, similar in spirit to flow-matching & InDI.
3/n
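Why the residual approximates the score, in a nutshell (my sketch, assuming a Gaussian kernel of width h): with p̂(x) = (1/n) Σᵢ φ((x − xᵢ)/h) and weights wᵢ(x) ∝ φ((x − xᵢ)/h), differentiating gives
∇ log p̂(x) = (1/h²) (Σᵢ wᵢ(x) xᵢ / Σᵢ wᵢ(x) − x) = (f(x) − x)/h²
so, up to the 1/h² scaling, the mean-shift residual is exactly the score of the kernel density estimate.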
Random matrices are very important in modern statistics and machine learning, not to mention physics.
A model about which much less is known is matrices sampled uniformly at random from the set of doubly stochastic matrices.
A thread -
1/n
First, what are doubly stochastic matrices?
Non-negative square matrices whose row & column sums all equal 1.
The set of doubly stochastic matrices is also known as the Birkhoff polytope: an (n−1)² dimensional convex polytope in ℝⁿˣⁿ with extreme points being permutation matrices.
2/n
The extreme points of the Birkhoff polytope (permutations) are sparse matrices, but a typical matrix sampled from inside the polytope is, by contrast, very dense.
Since rows and columns are exchangeable, the entries of a sampled matrix have the same marginal distribution.
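For experimenting, here's a hedged numpy sketch: Sinkhorn normalization produces a point inside the Birkhoff polytope (note this is *not* uniform sampling, which is the hard part), and its entries come out strictly positive, i.e. dense:

import numpy as np

rng = np.random.default_rng(0)
M = rng.exponential(size=(5, 5))     # any strictly positive start
for _ in range(200):                 # alternate row/column normalization (Sinkhorn)
    M /= M.sum(axis=1, keepdims=True)
    M /= M.sum(axis=0, keepdims=True)

print(M.sum(axis=0), M.sum(axis=1))  # all ≈ 1: doubly stochastic, and fully dense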
Least-squares regression can teach a lot about some complex ideas in modern machine learning, including overfitting & double descent.
Let's assume A is n-by-p, so we have n data points and p parameters.
1/10
If n ≥ p (the "under-fitting" or "over-determined" case), the solution is
x̃ = (AᵀA)⁻¹ Aᵀ y
But if n < p (the "over-fitting" or "under-determined" case), there are infinitely many solutions that give *zero* training error. We pick the minimum-norm (min ‖x‖²) solution:
x̃ = Aᵀ(AAᵀ)⁻¹ y
2/10
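A quick numpy check (my own sketch) that the min-norm formula really interpolates when n < p:

import numpy as np

rng = np.random.default_rng(0)
n, p = 20, 50                          # n < p: more parameters than data points
A = rng.standard_normal((n, p))
y = rng.standard_normal(n)

x = A.T @ np.linalg.solve(A @ A.T, y)  # min-norm solution A'(AA')^{-1} y
print(np.allclose(A @ x, y))           # True: zero training error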
In either case, the solution can be compactly written in terms of the SVD of A:
A = USVᵀ
where U & V are orthogonal matrices of size n×n & p×p, and S is n×p with k = rank(A) nonzero diagonal entries (the singular values).
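One way to see the two cases unified (my sketch): numpy's pinv is computed from the SVD and reproduces both closed forms:

import numpy as np

rng = np.random.default_rng(1)
for n, p in [(50, 20), (20, 50)]:      # over- & under-determined cases
    A = rng.standard_normal((n, p))
    y = rng.standard_normal(n)
    x = (np.linalg.solve(A.T @ A, A.T @ y) if n >= p
         else A.T @ np.linalg.solve(A @ A.T, y))
    # pinv(A) = V S^+ U' from the SVD A = U S V' covers both cases at once
    print(np.allclose(x, np.linalg.pinv(A) @ y))  # True, True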
Did you ever take a photo & wish you'd zoomed in more or framed better? When this happens, we just crop.
Now there's a better way: Zoom Enhance, a new feature my team just shipped on Pixel. Available in Google Photos under Tools, it enhances both zoomed & un-zoomed images.
1/n
Zoom Enhance is our first image-to-image diffusion model designed & optimized to run fully on-device. It allows you to crop or frame the shot you wanted, and enhance it, after capture. The input can be from any device, Pixel or not, old or new. Below are some examples & use cases.
2/n
Let's say you've zoomed to the max on your Pixel 8/9 Pro and got your shot, but you wish you could get a little closer. Now you can zoom in more, and enhance.