Peyman Milanfar
Mar 29, 2020 · 6 tweets
(1/5) One of the most surprising and little-known results in classical statistics is the relationship between the mean, median, and standard deviation. If the distribution has finite variance, then the distance between the median and the mean is bounded by one standard deviation.
(2/5) We assigned this as a HW exercise in a class I taught as a grad student at MIT circa 1991.

Coincidentally, it was written up around the same time by C. Mallows in "Another comment on O'Cinneide," The American Statistician, 45(3).

The proof is easy using Jensen's inequality twice:
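The proof image didn't survive the unroll; here is a reconstruction of the two-Jensen argument (μ the mean, m the median, σ the standard deviation):

```latex
\begin{align*}
|\mu - m| = |\mathbb{E}[X - m]|
  &\le \mathbb{E}\,|X - m|            && \text{(Jensen: $|\cdot|$ is convex)} \\
  &\le \mathbb{E}\,|X - \mu|          && \text{(the median minimizes $c \mapsto \mathbb{E}|X - c|$)} \\
  &\le \sqrt{\mathbb{E}\,(X - \mu)^2} && \text{(Jensen: $x^2$ is convex)} \\
  &= \sigma.
\end{align*}
```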
(3/5) If the distribution is unimodal, the bound is even tighter: the distance between the median and the mean is at most √(3/5)·σ ≈ 0.7746·σ.
epubs.siam.org/doi/10.1137/S0…
(4/5) What about in higher dimensions?

Yes, with the median defined appropriately, the result carries over. The median here is the "spatial median": the (unique) point m minimizing the expected distance E(|X − m| − |X|) to the sample points (subtracting |X| keeps the expectation finite even for heavy-tailed X).

The result appears in this book:
amazon.com/Random-Vectors…
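The spatial median has no closed form, but the standard Weiszfeld fixed-point iteration computes it. A minimal sketch (my code, using the sample version of the definition above):

```python
import numpy as np

def spatial_median(X, n_iter=500, tol=1e-9):
    """Geometric (spatial) median of the rows of X via Weiszfeld's iteration."""
    m = X.mean(axis=0)                       # start from the coordinate-wise mean
    for _ in range(n_iter):
        d = np.maximum(np.linalg.norm(X - m, axis=1), 1e-12)  # avoid /0 at a sample
        w = 1.0 / d                          # inverse-distance weights
        m_new = (w[:, None] * X).sum(axis=0) / w.sum()
        if np.linalg.norm(m_new - m) < tol:
            break
        m = m_new
    return m

rng = np.random.default_rng(0)
X = rng.exponential(size=(2000, 2))          # a skewed 2-D cloud
m, mu = spatial_median(X), X.mean(axis=0)
# The same two-Jensen argument gives ||mu - m|| <= sqrt(E ||X - mu||^2):
assert np.linalg.norm(mu - m) <= np.sqrt(((X - mu) ** 2).sum(axis=1).mean())
```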
(5/5) Results like this are not just curiosities; they are quite useful in practice, since they allow distribution-free estimates of one quantity given the other two. This is important in meta-analyses of studies in the biomedical sciences, etc.

(Open Access) ncbi.nlm.nih.gov/pmc/articles/P…
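As a quick distribution-free sanity check (my addition, not from the thread), the 1-D bound is easy to probe numerically on heavily skewed data:

```python
import numpy as np

rng = np.random.default_rng(1)
for name, s in [("exponential", rng.exponential(size=100_000)),
                ("lognormal", rng.lognormal(sigma=1.5, size=100_000))]:
    gap = abs(s.mean() - np.median(s))
    print(f"{name}: |mean - median| = {gap:.3f} <= sd = {s.std():.3f}")
```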

More from @docmilanfar

Dec 20
Years ago, when my wife and I were planning to buy a home, my dad stunned me with a quick mental calculation of loan payments.

I asked him how - he said he'd learned the strange formula for compound interest from his father, who was a merchant in 19th century Iran.

1/4
The origins of the formula my dad knew are a mystery, but I know it has been used in the bazaars of Iran (and elsewhere) for as long as anyone can remember.

It has an advantage: it's very easy to compute on an abacus. The exact compounding formula is much more complicated.

2/4
I figured out how the two formulae relate: the historical formula is the Taylor series of the exact formula around r = 0.

But the crazy thing is that the old Persian formula predates Taylor's by hundreds (maybe thousands) of years, having been passed down for generations.

3/4
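The formulas in this thread were posted as images. As an illustration (my reconstruction, not necessarily the exact bazaar recipe): expanding the standard amortized-payment formula P·r / (1 − (1+r)⁻ⁿ) to first order around r = 0 gives P/n + P·r·(n+1)/(2n), which needs only the arithmetic an abacus handles well:

```python
def payment_exact(P, r, n):
    """Standard amortized payment per period: principal P, per-period rate r, n periods."""
    return P * r / (1 - (1 + r) ** -n)

def payment_first_order(P, r, n):
    """First-order Taylor expansion of payment_exact around r = 0."""
    return P / n + P * r * (n + 1) / (2 * n)

P, r, n = 300_000, 0.01, 12           # 1%/month over one year
print(payment_exact(P, r, n))         # ~26654.6
print(payment_first_order(P, r, n))   # ~26625.0 (the approximation drifts as n*r grows)
```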
Dec 8
How are Kernel Smoothing in statistics, Data-Adaptive Filters in image processing, and Attention in Machine Learning related?

My goal is not to argue who should get credit for what, but to show a progression of closely related ideas over time and across neighboring fields.

1/n
In the beginning there was Kernel Regression - a powerful and flexible way to fit an implicit function point-wise to samples. The classic KR is based on interpolation kernels that are a function of the position (x) of the samples and not on the values (y) of the samples.

2/n
Instead of a fixed smoothing parameter h, we can adjust it dynamically based on the local density of samples near the point of interest. This accounts for variations in the spatial distribution of the samples, but still doesn't take their values into account.

3/n
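A minimal 1-D sketch of both ideas (my code, not Milanfar's): the classic Nadaraya-Watson kernel regression estimator, plus an optional k-nearest-neighbor rule that shrinks the bandwidth h where samples are dense:

```python
import numpy as np

def nadaraya_watson(xq, x, y, h=0.3, k_adaptive=None):
    """Kernel regression: Gaussian-weighted average of y; weights depend on x only.

    If k_adaptive is set, each query point uses the distance to its k-th
    nearest sample as its bandwidth (denser regions -> smaller h)."""
    d = np.abs(xq[:, None] - x[None, :])         # pairwise distances |xq - xi|
    if k_adaptive is not None:
        h = np.sort(d, axis=1)[:, [k_adaptive]]  # one bandwidth per query point
    w = np.exp(-0.5 * (d / h) ** 2)
    return (w * y).sum(axis=1) / w.sum(axis=1)

# Noisy sine, unevenly sampled:
rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 2 * np.pi, 200))
y = np.sin(x) + 0.2 * rng.standard_normal(x.size)
xq = np.linspace(0, 2 * np.pi, 100)
y_fixed = nadaraya_watson(xq, x, y, h=0.3)
y_adaptive = nadaraya_watson(xq, x, y, k_adaptive=15)
```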
Dec 3
“On a log-log plot, my grandmother fits on a straight line.”
-Physicist Fritz Houtermans

There's a lot of truth to this. Log-log plots are often abused and can be very misleading.

1/5
A plot of empirical data can reveal hidden phenomena or scaling. An important and common model is to look for power laws like

p(x) ≃ L(x) xᵃ

where L(x) is slowly varying, so that xᵃ is dominant

Power laws appear all over physics, biology, math, economics, etc. However...

2/5
...just because your data looks like a line on a log-log plot, you can't conclude that you're looking at a power law.

In fact, roughly straight behavior on a log-log scale is a necessary condition, but it is not sufficient for power-law behavior. Take this example:

3/5
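The example image is gone from this unroll; the classic counterexample (my sketch) is the lognormal, whose tail looks locally straight on log-log axes even though it is not a power law. A true power law has one constant log-log slope; the lognormal's keeps steepening:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.sort(rng.lognormal(sigma=2.5, size=200_000))
ccdf = 1.0 - np.arange(x.size) / x.size          # empirical P(X >= x)

# Log-log slope of the tail over successive quantile bands:
for lo, hi in [(0.50, 0.90), (0.90, 0.99), (0.99, 0.999)]:
    i, j = int(lo * x.size), int(hi * x.size)
    s = np.log(ccdf[j] / ccdf[i]) / np.log(x[j] / x[i])
    print(f"quantiles {lo}-{hi}: slope ~ {s:.2f}")   # ~ -0.5, -0.9, -1.2: not constant
```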
Nov 10
Integral geometry is a beautiful topic bridging geometry, probability & statistics

Say you have a curve with any shape, possibly even self-intersecting. How can you measure its length?

This has many applications: the curve could be a strand of DNA or a twisted length of wire.

1/n
A curve is a collection of tiny segments: measure each segment and sum. You can go further: make the segments so small they are essentially points, and count the points instead.

A practical way to do this: drop many lines, or a dense grid, intersecting the shape, and count intersections.

2/n
The curve's length is the sum of the intersection counts n(θ, p) over all lines (parametrized in polar coordinates by angle θ and offset p), counting multiplicities. This is the beautiful Crofton formula:

Length = 1/2 ∫∫ n(θ, p) dθ dp

The 1/2 is there because oriented lines are a double cover of un-oriented lines.

3/n
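A Monte Carlo sketch of the formula (my illustration): parametrize un-oriented lines by a normal angle θ ∈ [0, π) and offset p, drop them uniformly over a window containing the curve, count intersections with a polyline, and rescale by the measure of the window of lines:

```python
import numpy as np

def crofton_length(pts, n_lines=20_000, p_max=2.0, seed=0):
    """Estimate the length of the closed polyline `pts` via Crofton:
    Length ~ 1/2 * (measure of line window) * E[# intersections per line]."""
    rng = np.random.default_rng(seed)
    t = rng.uniform(0, np.pi, n_lines)               # line normal angles
    p = rng.uniform(-p_max, p_max, n_lines)          # line offsets
    u = np.stack([np.cos(t), np.sin(t)], axis=1)     # unit normals
    a, b = pts, np.roll(pts, -1, axis=0)             # segment endpoints
    sa, sb = a @ u.T - p, b @ u.T - p                # signed side of each line
    n_mean = ((sa * sb) < 0).sum(axis=0).mean()      # mean intersections per line
    return 0.5 * (np.pi * 2 * p_max) * n_mean

# Check on a unit circle (true length 2*pi ~ 6.283):
th = np.linspace(0, 2 * np.pi, 200, endpoint=False)
print(crofton_length(np.stack([np.cos(th), np.sin(th)], axis=1)))  # ~6.28
```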
Sep 24
Smoothing splines fit a function to data as the solution of a regularized least-squares optimization problem.

But it's also possible to do it in one shot, by constructing and applying an unusually shaped kernel once.

Is it possible to solve other optimization problems this way? Surprisingly, yes.

1/n
This is just one instance of how one can "kernelize" an optimization problem; that is, approximate the solution of an optimization problem in just one step by constructing and applying a kernel once to the input.

Given some conditions, you can do it much more generally.

2/n
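A discrete toy version of this (my sketch, not the thread's exact construction): for a quadratic regularizer, min_x ||y − x||² + λ||Dx||² has the closed-form solution x̂ = (I + λDᵀD)⁻¹y, so the whole optimization collapses into one application of a fixed "kernel" matrix:

```python
import numpy as np

n, lam = 200, 50.0
D = np.diff(np.eye(n), n=2, axis=0)            # second differences: curvature penalty
K = np.linalg.inv(np.eye(n) + lam * D.T @ D)   # the one-shot "kernel"

rng = np.random.default_rng(0)
t = np.linspace(0, 3, n)
y = np.sin(2 * np.pi * t) + 0.3 * rng.standard_normal(n)
x_hat = K @ y                                  # solves the optimization in one step
# Each row of K is one of the unusually shaped equivalent kernels.
```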
If you specialize the regularization to the form φ(x) = ρ(||Ax||), where A is stationary and isotropic (A_ij = R(|i − j|)), this gives tidy conversions between φ(x) and the kernel K(x).

3/n
Sep 18
Mean-shift iteratively moves points towards regions of higher density. It does so by placing a kernel at each data point, computing the mean of the data points within that window, and shifting points towards this mean until convergence. Look familiar?

1/n
(Animation @gabrielpeyre)
The first term on the right-hand side of the ODE has the form of a pseudo-linear denoiser f(x) = W(x) x: a weighted average of the points, where the weights depend on the data. The overall mean-shift process is a lot like a residual flow:

d/dt x(t) = f(x(t)) - x(t)

2/n
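A minimal sketch of the process (my code): the Gaussian-kernel map f(x) = W(x)x, iterated as the discretized residual flow x ← x + η(f(x) − x):

```python
import numpy as np

def f(x, data, h=0.5):
    """Pseudo-linear denoiser f(x) = W(x) x: kernel-weighted mean of the data."""
    d2 = ((x[:, None, :] - data[None, :, :]) ** 2).sum(-1)   # squared distances
    W = np.exp(-0.5 * d2 / h**2)
    W /= W.sum(axis=1, keepdims=True)          # rows of the data-dependent W(x)
    return W @ data

rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(-2, 0.4, (150, 2)),        # two modes
                       rng.normal(+2, 0.4, (150, 2))])
x = data.copy()
for _ in range(50):                            # Euler steps of d/dt x = f(x) - x
    x += 0.5 * (f(x, data) - x)
# x has collapsed onto the two density peaks near (-2, -2) and (2, 2)
```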
The residual on the RHS approximates the "score" (the gradient of the log of the empirical density of x), making this a gradient flow:

d/dt x(t) ≈ ∇ log p̂(x(t))

So mean-shift (a) estimates the empirical density and (b) flows points to nearby peaks, similarly to flow-matching and InDI.

3/n