Note that this is only true for two r.v.'s: if X~X' and Y~Y', then
𝔼[X+Y]=𝔼[X'+Y']
whenever 𝔼[X+Y] is defined (regardless of whether 𝔼[X] and 𝔼[Y] are defined). (cf. below from G. Simons (1977)).
Turns out, this fails for 3 r.v.'s (same paper by Simons, and Counterexample 6.1 in Stoyanov's book).
Sometimes 𝔼[X+Y+Z], 𝔼[X'+Y'+Z'] both exist, yet 𝔼[X+Y+Z]≠𝔼[X'+Y'+Z'].
(Again, for 2 r.v.'s the result holds: if 𝔼[X+Y], 𝔼[X'+Y'] both exist, then 𝔼[X+Y]=𝔼[X'+Y'].)
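Quick toy illustration of that parenthetical (mine, not Simons' construction): take X standard Cauchy and Y = -X. Then 𝔼[X] and 𝔼[Y] are undefined, yet X+Y = 0, so 𝔼[X+Y] = 0. With *independent* copies of the same marginals, the sum is again (a scaled) Cauchy and has no expectation at all, which is why this doesn't contradict the 2-r.v. statement. A minimal numpy sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Coupling 1: X standard Cauchy, Y = -X. Same (Cauchy) marginals, since the
# Cauchy distribution is symmetric; E[X], E[Y] are undefined, yet X + Y = 0.
x = rng.standard_cauchy(n)
print((x + (-x)).mean())          # exactly 0: E[X+Y] = 0

# Coupling 2: independent copies with the same marginals. X' + Y' is a
# (scaled) Cauchy, so E[X'+Y'] is undefined: running means never settle.
xp, yp = rng.standard_cauchy(n), rng.standard_cauchy(n)
s = xp + yp
for m in (10**3, 10**4, 10**5, 10**6):
    print(m, s[:m].mean())
```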
Here is the paper in question, referenced in Stoyanov's book.
X,Y,Z are dependent and defined such that all three are marginally tan(πU/2) for U~Unif[0,1], but X+Y+Z=0. So 𝔼[X+Y+Z]=0.
X',Y',Z' have a different dependence structure but the same marginals, and 𝔼[X'+Y'+Z']=(4/π) log 2.
Here, p and ε are known parameters (inputs); the goal is to be correct with probability at least 99/100 (over the random coin flips). As we will see, 99/100 is somewhat arbitrary: any constant c with 1/2 < c < 1 would work.
Also, warning: I'm going to give DETAILS.
2/
Let's start with Q1, the "fair" vs. ε-biased coin setting (p=1/2). As more than 66% of you answered, the number of coin flips necessary and sufficient here is Θ(1/ε²). Why?
One way to gain intuition is to think of it in terms of "signal to noise."
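To make "signal to noise" concrete: after n flips, a bias of ε shifts the expected number of heads by εn (the signal), while the standard deviation of the head count is about √n/2 (the noise); the signal dominates once n ≳ 1/ε². Here is a minimal simulation sketch, assuming the natural threshold test (the constant 11 is my choice, justified by Hoeffding's inequality, not something from the thread):

```python
import math, random

def success_rate(eps, p, trials=2_000):
    """Flip the coin n = ceil(11/eps^2) times; declare "biased" iff the
    empirical frequency of heads is more than eps/2 away from 1/2.
    By Hoeffding, either error probability is <= 2*exp(-n*eps^2/2) <= 1/100."""
    n = math.ceil(11 / eps**2)
    truth_is_biased = abs(p - 0.5) >= eps
    correct = 0
    for _ in range(trials):
        heads = sum(random.random() < p for _ in range(n))
        says_biased = abs(heads / n - 0.5) > eps / 2
        correct += (says_biased == truth_is_biased)
    return correct / trials

eps = 0.1
print(success_rate(eps, 0.5))        # fair coin: should be >= 0.99
print(success_rate(eps, 0.5 + eps))  # eps-biased coin: should be >= 0.99
```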
Stuff I wish I had known sooner: "Pinsker's inequality is cancelled," a thread. 🧵
If you want to relate total variation (TV) and Kullback-Leibler divergence (KL), then everyone, from textbooks to Google, will point you to Pinsker's inequality: TV(p,q) ≤ √(½ KL(p||q) )
1/
It's warranted: a great inequality, and tight in the regime where KL→0. Now, the issue is that KL is unbounded, while 0≤TV≤1 always, so the bound becomes vacuous when KL > 2. Oops.
2/
*I'm using it with the natural log, by the way, so no pesky log 2 constants.
This is annoying, because in many situations you want to bound a TV that is close to 1. For instance, in hypothesis testing: "how many samples to distinguish an ε-biased coin from a fair one, with probability 1-δ?"
Well, you'd like to start with a chain of the form 1-δ ≤ TV ≤ ?.
3/
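To see where it breaks down, here's a tiny sketch on Bernoulli pairs (toy numbers of my own choosing, using the natural log as above):

```python
import math

def tv(a, b):   # total variation distance between Ber(a) and Ber(b)
    return abs(a - b)

def kl(a, b):   # KL divergence KL(Ber(a) || Ber(b)), natural log
    return a * math.log(a / b) + (1 - a) * math.log((1 - a) / (1 - b))

for a, b in [(0.5, 0.4), (0.5, 0.1), (0.5, 0.001)]:
    bound = math.sqrt(0.5 * kl(a, b))
    print(f"TV={tv(a, b):.3f}   KL={kl(a, b):.3f}   Pinsker bound={bound:.3f}")
# Last line: KL > 2, so the "bound" exceeds 1 and tells us nothing,
# since TV <= 1 always.
```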
📊 Answers and discussion for yesterday's quiz on sequences (and their names).
Before we go to the first question, again: if you haven't, bookmark oeis.org. It's useful: the solution to some of your problems may be recorded there!
... ascent sequences avoiding the pattern 201 (oeis.org/A202062)? Or the number of set partitions of [n] that avoid 3-crossings (oeis.org/A108304)? Who am I to judge?
A paper with a broken and ill-defined "solution" to privacy in ML gets a $50k prize?! While said solution was already fully broken at the time of the award—on a challenge chosen by the authors?🤦♂️
The presentation slides must have been hella convincing.
Will Carlini et al. get a share of the prize money for breaking the system prior to the award ceremony? arxiv.org/abs/2011.05315
Disclaimer: I am in no way affiliated w/ any of the authors of the attack, nor w/ the authors of the original paper, nor w/ basically anyone involved. I'm just dismayed at the signal sent, & the overall waste—so much good work done elsewhere, and this is what gets the highlight.
📊 It's Thursday, time for our weekly (but weakly so) quiz! This week: integers, and remarkable sequences.
See this as an extended ad for the OEIS, if you will. Also, if you don't know the OEIS: go check it out (and come back)! oeis.org
1/
Let's start with a "fun" one. To be unhelpful, let's say as a hint that I'm expanding exp(e^x-1) as a series around 0, and am interested in the coefficients.
Q1: Guess what number comes next—maybe this sequence will ring a bell? [There might be a trap]
1, 1, 2, 5, 15, 52, ??
2/
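If you want to check your guess against the hint (spoiler in the output, obviously), here's a small sketch of mine: the coefficient of x^n in exp(e^x-1) is B_n/n!, with the B_n given by the recurrence below.

```python
from math import comb

# The coefficient of x^n in exp(e^x - 1) is B_n / n!, where B_0 = 1 and
# B_{n+1} = sum_{k=0}^{n} C(n, k) * B_k  (these are the Bell numbers).
def coefficients(m):
    B = [1]
    for n in range(m):
        B.append(sum(comb(n, k) * B[k] for k in range(n + 1)))
    return B

print(coefficients(6))  # 1, 1, 2, 5, 15, 52, ... and the next one
```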
We won't do too many of those "guess what's next" games though, since they are mostly meaningless. But let's mention that, of course, @xkcdComic has a comic about integer sequences, because XKCD is basically Borges' library for the Internet. xkcd.com/2016/
3/
📊 It's Thursday for many, time for this week's quiz! As alluded to earlier, we're going to flip coins. Like, a lot.
Today: you have a biased coin which lands heads with some probability p∊(0,1), and you can toss it as many times as you want. You don't know p, though...
1/10
... which is a real shame, since you want to simulate *another* biased coin which lands heads w/ proba exactly f(p), where f: (0,1)→(0,1) is fixed.
(Also, you'd like to do this efficiently, thank you very much.)
Let's start with a simple function: f(p)=1/2.
Q1: Can you?
2/10
OK, so that was the warmup. Let's formalize a bit more what "efficiently" means here. For us, it will mean (1) using a number of coin tosses that is finite almost surely (no infinite sequences of tosses), and (2) being implementable by a finite automaton.
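For the record, here's a minimal sketch of the classical answer to Q1, the von Neumann trick (my write-up, not necessarily the thread's), which meets both requirements:

```python
import random

def p_coin(p):
    """One toss of the biased coin: 1 (heads) with probability p."""
    return 1 if random.random() < p else 0

def fair_toss(p):
    """Von Neumann's trick: toss the p-coin twice; HT -> heads, TH -> tails,
    HH/TT -> discard and retry. P(HT) = P(TH) = p(1-p), so the output is
    exactly fair; each round succeeds w.p. 2p(1-p) > 0, so the number of
    tosses is finite almost surely; and only the first toss of the current
    pair needs to be remembered, so a finite automaton suffices."""
    while True:
        a, b = p_coin(p), p_coin(p)
        if a != b:
            return a

# Quick sanity check (toy bias): the empirical frequency should be close to 1/2.
print(sum(fair_toss(0.9) for _ in range(100_000)) / 100_000)
```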