
We wrote a paper proposing a new relaxation of differential privacy that has lots of nice properties: arxiv.org/abs/1905.02383 It's 85 pages long, so here is the TL;DR. Suppose S is a dataset with your data, and S' is the dataset with your data removed. 1/

Differential privacy can be viewed as promising that any hypothesis test aiming to distinguish whether I used S or S' in my computation, if it has false positive rate alpha, must have a true positive rate of at most e^eps*alpha + delta. Cool - it's easy to interpret this guarantee! 2/
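A minimal sketch of that guarantee (the helper name is mine, not from the paper): under (eps, delta)-DP, a test with false positive rate alpha has power at most min(1, e^eps·alpha + delta).

```python
import math

def dp_power_bound(alpha: float, eps: float, delta: float) -> float:
    """Upper bound on the true positive rate (power) of any test that
    tries to tell S from S', given its false positive rate alpha,
    under (eps, delta)-differential privacy."""
    return min(1.0, math.exp(eps) * alpha + delta)

# e.g. under (1.0, 1e-5)-DP, a test with a 5% false positive rate
# can have power at most about 13.6%.
print(round(dp_power_bound(0.05, 1.0, 1e-5), 3))
```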

This characterization is exact: it's equivalent to the usual definition of DP. But DP is mis-parameterized in that when we compose mechanisms, there is no way to describe the resulting tradeoff between Type I and Type II errors with any parameters eps,delta anymore. 3/

This is fundamentally the reason why composition theorems for DP are not tight. Even "optimal" composition (which is #P-hard to compute) only provides a bound on the correct tradeoff between Type I and Type II error. (It is the tightest bound parameterized by eps,delta, but still loose.) 4/

So: We propose describing privacy guarantees with a function f instead of 2 parameters. The function describes the optimal tradeoff between Type I and Type II error. This is expressive enough to exactly capture composition, and admits a simple calculus for composition. 5/
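As a concrete instance of such a tradeoff function: for distinguishing N(0,1) from N(mu,1), the optimal Type II error at Type I error alpha is G_mu(alpha) = Phi(Phi^{-1}(1 - alpha) - mu), where Phi is the standard normal CDF. A sketch (the function name is mine):

```python
from statistics import NormalDist

_std = NormalDist()  # standard normal: mean 0, standard deviation 1

def gaussian_tradeoff(alpha: float, mu: float) -> float:
    """Optimal Type II error for testing N(0,1) vs N(mu,1) at Type I
    error alpha: G_mu(alpha) = Phi(Phi^{-1}(1 - alpha) - mu)."""
    return _std.cdf(_std.inv_cdf(1.0 - alpha) - mu)

# With mu = 0 the two hypotheses coincide, so beta = 1 - alpha:
# no test does better than random guessing (perfect privacy).
print(gaussian_tradeoff(0.05, 0.0))
```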

It turns out this way of describing a privacy guarantee is "dual" to describing it with an infinite collection of (eps,delta)-DP guarantees. You can switch back and forth via the convex conjugate of the function f. This is useful for a couple of reasons. Most notably, 6/

It provides a way to import known results from the DP literature into this new framework. This is how we get "privacy amplification by sub-sampling" for this new family of definitions --- something that has proven challenging for other proposed relaxations of DP. 7/
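To illustrate the dual view: for the Gaussian case, the paper's duality gives a closed form for the delta(eps) curve such that mu-GDP implies (eps, delta(eps))-DP for every eps >= 0. A sketch based on my reading of that conversion formula (verify against the paper before relying on it):

```python
import math
from statistics import NormalDist

_std = NormalDist()  # standard normal CDF via _std.cdf

def gdp_to_delta(eps: float, mu: float) -> float:
    """delta(eps) such that mu-GDP implies (eps, delta(eps))-DP, via the
    convex-conjugate duality described in the paper:
    delta(eps) = Phi(-eps/mu + mu/2) - e^eps * Phi(-eps/mu - mu/2)."""
    return (_std.cdf(-eps / mu + mu / 2)
            - math.exp(eps) * _std.cdf(-eps / mu - mu / 2))

# One mu-GDP guarantee corresponds to a whole family of (eps, delta) pairs:
for eps in (0.5, 1.0, 2.0):
    print(eps, gdp_to_delta(eps, 1.0))
```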

Isn't keeping track of functions complex? Yes, but there is a "central limit theorem". In the limit under composition, no matter what your functions f looked like, the privacy guarantee converges to the tradeoff function for testing two standard Gaussians with shifted means. 8/

This family of functions has only one parameter (the gap in the means), and has a simple additive composition rule. We call it "Gaussian Differential Privacy". The CLT means that it is the -only- hypothesis testing based definition of privacy tightly closed under composition. 9/
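The additive rule is simple enough to fit in a one-liner: the parameters compose in quadrature, so mechanisms that are mu_1-GDP, ..., mu_n-GDP compose to sqrt(mu_1^2 + ... + mu_n^2)-GDP. A sketch:

```python
import math

def compose_gdp(mus) -> float:
    """Gaussian DP composes additively in mu^2: running mechanisms with
    parameters mu_1, ..., mu_n yields sqrt(mu_1^2 + ... + mu_n^2)-GDP."""
    return math.sqrt(sum(mu * mu for mu in mus))

# Ten runs of a 0.3-GDP mechanism compose to sqrt(10) * 0.3 ≈ 0.95-GDP.
print(compose_gdp([0.3] * 10))
```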

It also provides an analytic tool. Too hard to reason about the composition of many functions? Compute the Gaussian-DP parameter using our central limit theorem instead. Convergence is fast: after 10 compositions, it's hard to distinguish the CLT bound from the true bound. 10/

I think it's pretty neat. I can say that, because all credit for this work goes to Jinshuo Dong. If you are at the Simons workshop today, come and ask him questions about it. He's speaking at 11:30. 11/11


Alexander Friedmann submitted "On the Curvature of Space" to Zeitschrift für Physik #OTD in 1922, describing a homogeneous and isotropic universe that expands over time according to its matter & energy content. It's the arena for all of modern cosmology.

link.springer.com/article/10.102…

The Instituut-Lorentz has scans of his original manuscript, with handwritten notes, as well as some of his correspondence with Ehrenfest and others.

lorentz.leidenuniv.nl/history/Friedm…

When speaking of his pioneering work applying general relativity to cosmology, Friedmann was said to frequently quote Dante: "The waters I am entering, no one yet has crossed."

arxiv.org/abs/1302.1498v1

If you have an interest in #AI, #HealthTech or #PatientSafety - then please read this evidence based thread which tells the story of @DrMurphy11 & an #eHealth #AI #Chatbot.

Read on, or see single tweet summary here👇0/44

@DrMurphy11 is not a 'nerd'; he's a pretty typical NHS consultant with an interest in #PatientSafety.

On 6th Jan 2017 a tweet about an #eHealth #AI #Chatbot in #NHS trials caught his attention, so he thought he'd take a look. 1/44

Evidence here 👇

Dr Murphy downloaded the @babylonhealth App & tried the #Chatbot with a few simple clinical presentations. It quickly became apparent that the #Chatbot had flaws, raising the question of whether the App had been validated as a triage tool.

As evidenced by👇 2/44

New blog post summarizing recent work by grad student Pengfei Li. tritonstation.wordpress.com/2018/06/14/rar…

The scientific version of the paper is in press at A&A; you can get the preprint at arxiv.org/abs/1803.00022

Thread on how to review papers about generic improvements to GANs

There are a lot of papers about theoretical or empirical studies of how GANs work, papers about how to do new strange and interesting things with GANs (e.g. the first papers on unsupervised translation), new metrics, etc. This thread isn't about those.

There are also a lot of papers about GANs as part of a larger system, like GANs for semi-supervised learning, differential privacy, dataset augmentation, etc. This thread also isn't about those---evaluate them in terms of the larger system's application area.

Gravitational corrections to the value of the muon #gminus2 - a quick roundup.

It all started with this paper - arxiv.org/abs/1801.10246 - the 3rd in a trilogy. Claimed that a correction due to Earth's gravity should alleviate tension between theory and experiment:

In a blog post - motls.blogspot.nl/2018/02/experi… - @lumidek points out (among other things) that if you're considering corrections due to the gravitational potential, the Sun's potential should dominate, not the Earth's

On Twitter (and in a blog post) - realselfenergy.blogspot.nl/2018/02/update… - @PitifulRed offers a short proof of why the gravitational correction must be zero.

DeepMind has generalized AlphaGo Zero to also work with Chess and Shogi. It learns superhuman performance through self-play within hours. Very nice work.

arxiv.org/abs/1712.01815

BUT... (See next tweet)

BUT before you write that headline about how these results show that we have general AI through self-learning, remember:

1. Different networks were trained for each game. The Chess network does not play Go.

2. The networks differ not only in training, but in topology.

This is because the board state of the different games needs to be represented differently: Chess needs different input features than Go because it has more piece types. So each game requires human engineering.
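A rough illustration of the point (the plane counts are my recollection of the AlphaGo Zero / AlphaZero papers; treat them as approximate): the input tensors are hand-designed per game and differ in shape, not just in learned weights.

```python
# Approximate per-game input shapes (board height, width, feature planes).
# The exact counts come from hand-engineered encodings in the papers;
# the point is that each game needs its own encoding.
INPUT_SHAPES = {
    "go": (19, 19, 17),    # recent stone positions plus a colour-to-play plane
    "chess": (8, 8, 119),  # piece planes, move history, castling, repetition
    "shogi": (9, 9, 362),  # even more planes: captured pieces can be dropped
}

for game, (h, w, planes) in INPUT_SHAPES.items():
    print(f"{game}: {h}x{w} board, {planes} feature planes")
```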