Pitch: understanding universal consistency of the k-NN classifier under Wasserstein distances!
1/4
In more detail: A binary classifier g(Dₙ) trained on dataset Dₙ is universally consistent (UC) if the error probability Pₓ,ᵧ(g(Dₙ)(X)≠Y|Dₙ) converges to the Bayes risk as n→∞, regardless of the joint distribution of X and Y. This paper studies the universal consistency...
2/4
... of the k-NN classifier on spaces of probability measures under the p-Wasserstein distance. By studying geometric properties of Wasserstein spaces, we show that the k-NN classifier is (1) UC for any p≥1 on the space of measures finitely supported in ℚᵈ with rational masses...
3/4
... and (2) not UC for any p≥1 on the space of measures supported in (0,1), and (3) UC for p=2 on the space of d-dimensional Gaussian measures.
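To make the setting concrete, here is a minimal sketch (my own toy illustration, not code from the paper) of a k-NN classifier over finitely supported measures on ℝ, using the 1-Wasserstein distance via scipy.stats.wasserstein_distance; the paper's setting (general p-Wasserstein, measures on ℚᵈ, Gaussian measures) is broader.

```python
# Toy sketch: k-NN classification of finitely supported probability measures
# on R, with the 1-Wasserstein distance as the metric.
import numpy as np
from scipy.stats import wasserstein_distance

def knn_predict(train, labels, query, k=3):
    """train: list of (support, weights) pairs; query: one such pair."""
    q_support, q_weights = query
    dists = [wasserstein_distance(support, q_support, weights, q_weights)
             for (support, weights) in train]
    nearest = np.argsort(dists)[:k]
    votes = [labels[i] for i in nearest]
    return max(set(votes), key=votes.count)  # majority vote

# Toy data: class 0 = measures concentrated near 0, class 1 = near 1.
rng = np.random.default_rng(0)
train, labels = [], []
for label, center in [(0, 0.0), (1, 1.0)]:
    for _ in range(20):
        support = center + 0.1 * rng.standard_normal(5)
        weights = rng.dirichlet(np.ones(5))
        train.append((support, weights))
        labels.append(label)

query = (1.0 + 0.1 * rng.standard_normal(5), np.ones(5) / 5)
print(knn_predict(train, labels, query, k=5))  # expected: 1
```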
My first foray into shuffle #privacy, spearheaded by my impressive winter research intern Hongyi Lyu (maths undergrad @UniMelb), who managed to learn about DP+shuffle DP+distribution testing, all this in ~6 weeks!
We also use the recent amplification theorem of @vitalyFM, McMillan, and @_kunal_talwar_ (local privacy to shuffle privacy) to provide an alternative algo:
Greedy #algorithms can be surprisingly powerful, on top of often being quite intuitive and natural. (Of course, sometimes their *analysis* can be complicated... but hey, you do the analysis only once, then run the algo forever after!)
Let's have a look at a few examples.
2/
Q1 asked which problem is *not* (known to) admit an efficient, optimal greedy solution.
53.6% of you got it right: Set Cover. Well, one reason is that it's NP-hard, so that doesn't help (greedy or not)... en.wikipedia.org/wiki/Set_cover…
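For concreteness, here is the textbook greedy heuristic for Set Cover (a sketch of my own, not part of the quiz): repeatedly pick the set covering the most still-uncovered elements. It runs in polynomial time and guarantees a ~ln(n) approximation factor, but not optimality.

```python
# Standard greedy heuristic for Set Cover: repeatedly pick the set that
# covers the most still-uncovered elements. Polynomial time, and gives a
# (ln n)-factor approximation -- but not an optimal solution in general.
def greedy_set_cover(universe, sets):
    uncovered = set(universe)
    chosen = []
    while uncovered:
        best = max(sets, key=lambda s: len(uncovered & s))
        if not (uncovered & best):
            raise ValueError("some elements cannot be covered")
        chosen.append(best)
        uncovered -= best
    return chosen

universe = range(1, 8)
sets = [{1, 2, 3, 4}, {4, 5, 6}, {1, 5, 7}, {6, 7}]
print(greedy_set_cover(universe, sets))
```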
Note that this holds only for *two* r.v.'s: if X~X' and Y~Y', then
𝔼[X+Y]=𝔼[X'+Y']
whenever 𝔼[X+Y] is defined (regardless of whether 𝔼[X] and 𝔼[Y] are defined). (Cf. below, from G. Simons (1977).)
Turns out, this fails for 3 r.v.'s (same paper by Simons, and Counterexample 6.1 in Stoyanov's book).
Sometimes 𝔼[X+Y+Z], 𝔼[X'+Y'+Z'] both exist, yet 𝔼[X+Y+Z]≠𝔼[X'+Y'+Z'].
(Again, for 2 r.v.'s the result holds: if 𝔼[X+Y], 𝔼[X'+Y'] both exist, then 𝔼[X+Y]=𝔼[X'+Y'].)
Here, p and ε are known parameters (inputs); the goal is to be correct with probability at least 99/100 (over the random coin flips). As we will see, 99/100 is sort of arbitrary: any constant c with 1/2 < c < 1 would work.
Also, warning: I'm going to give DETAILS.
2/
Let's start with Q1, the "fair" vs. ε-biased coin setting (p=1/2). As more than 66% of you answered, the number of coin flips necessary and sufficient is Θ(1/ε²). Why?
One way to gain intuition is to think in terms of "signal to noise."
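As a quick sanity check (my own simulation sketch, not from the thread), here is the natural test: flip the coin n = C/ε² times, and declare "biased" when the empirical frequency of heads is more than ε/2 away from 1/2. The constant C controls the failure probability.

```python
# Quick simulation (illustration only): distinguishing a fair coin from an
# eps-biased one with n = C/eps^2 flips, by thresholding the empirical mean.
import numpy as np

def test_biased(flips, eps):
    """Declare 'biased' iff the empirical heads frequency is > eps/2 away from 1/2."""
    return abs(flips.mean() - 0.5) > eps / 2

rng = np.random.default_rng(1)
eps, C = 0.05, 20          # C controls the failure probability
n = int(C / eps**2)        # Theta(1/eps^2) flips

trials = 1000
errors = 0
for _ in range(trials):
    fair   = rng.random(n) < 0.5          # fair coin
    biased = rng.random(n) < 0.5 + eps    # eps-biased coin
    errors += test_biased(fair, eps)           # false positive
    errors += not test_biased(biased, eps)     # false negative
print("empirical error rate:", errors / (2 * trials))
```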
Stuff I wish I had known sooner: "Pinsker's inequality is cancelled," a thread. 🧵
If you want to relate total variation (TV) and Kullback-Leibler divergence (KL), then everyone, from textbooks to Google, will point you to Pinsker's inequality: TV(p,q) ≤ √(½ KL(p||q) )
1/
It's warranted: a great inequality, and tight in the regime where KL→0. Now, the issue is that KL is unbounded, while 0≤TV≤1 always, so the bound becomes vacuous when KL ≥ 2. Oops.
2/
*I'm using it with the natural log, by the way, so no pesky log 2 constants.
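To see the issue numerically (my own toy computation, natural log as above): for Bernoulli distributions, TV(p,q) = |p−q| ≤ 1, yet the Pinsker bound √(KL/2) blows past 1 as soon as KL ≥ 2.

```python
# Toy computation: TV vs. the Pinsker bound sqrt(KL/2) for Bernoulli(1/2)
# vs. Bernoulli(q), with KL in nats (natural log). Once KL >= 2 the bound
# exceeds 1 and is vacuous, since TV <= 1 always.
import numpy as np

def kl_bernoulli(p, q):
    return p * np.log(p / q) + (1 - p) * np.log((1 - p) / (1 - q))

def tv_bernoulli(p, q):
    return abs(p - q)

for q in [0.6, 0.9, 0.99, 0.999]:
    kl = kl_bernoulli(0.5, q)
    print(f"q={q}: TV={tv_bernoulli(0.5, q):.3f}, "
          f"Pinsker bound={np.sqrt(kl / 2):.3f}, KL={kl:.2f}")
```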
This is annoying, because in many situations you would want to bound a TV close to 1: for instance, in hypothesis testing, "how many samples to distinguish between an ε-biased and a fair coin, with probability 1-δ?"
Well, you'd like to start with a bound of the form 1-δ ≤ TV ≤ ?.
3/
📊 Answers and discussion for yesterday's quiz on sequences (and their names).
Before we go to the first question, again: if you haven't, bookmark oeis.org. It's useful: the solution to some of your problems may be recorded there!
... ascent sequences avoiding the pattern 201 (oeis.org/A202062)? Or the number of set partitions of [n] that avoid 3-crossings (oeis.org/A108304)? Who am I to judge?