New workshop at @icmlconf: Updatable Machine Learning (UpML 2022)!
Training models from scratch is expensive. How can we update a model post-deployment while avoiding the cost of retraining? Applications to unlearning, domain shift, & more! (1/3)
Ft. a stellar lineup of invited speakers, including: Chelsea Finn (@chelseabfinn), Shafi Goldwasser, Zico Kolter (@zicokolter), Nicolas Papernot (@NicolasPapernot), & Aaron Roth (@Aaroth)
They've studied UpML in a variety of contexts, including unlearning, robustness, & fairness. (2/3)
The workshop is co-organized with Ayush Sekhari (@ayush_sekhari, lead organizer) and Jayadev Acharya (@AcharyaJayadev), and supported by an excellent program committee (currently being finalized).
We look forward to seeing your best work in this emerging area! (3/3)
🧵One of the most important principles in the technical communication of complex ideas (e.g., writing a paper, giving a talk) is *organization*.
That is, making it clear what the major components are & how they fit together. If you do this well, you are probably 90% of the way there. (1/n)
A "top down" approach is usually the tried and true method. First communicate the high-level ideas/steps, before delving into their details. This is good because it's "truncatable": at some point "down the tree" it's ok if the audience misses a step (and they know this). (2/n)
On the other hand, if the structure is not clear, the audience must maintain constant attention. They don't know what's important and what's not, so they can't afford to miss a thing. And everyone gets lost in a technical talk/paper at some point, so this is key. (3/n)
How many of your research papers do you think will be relevant a year from now? Five years from now? 100 years from now? How do you feel about your answer?
I think most works written right now will not be too relevant in a decade. And that's totally fine! Maybe a paper has its day in the sun, is interesting for a bit, proves useful if you're lucky, but then the community progresses. Hopefully it inspires new ideas in others along the way.
New paper on arXiv: "Efficient Mean Estimation with Pure Differential Privacy via a Sum-of-Squares Exponential Mechanism," with @Samuel_BKH and @mahbodm_.
Finally resolves an open problem that's been on my mind since April 2019 (~2.5 years ago).
We give the first algorithm for mean estimation that is simultaneously:
-(ε,0)-differentially private (i.e., pure DP; see the recap after this list)
-O(d) sample complexity
-poly time
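Quick recap for those newer to DP (this is the standard textbook definition, not something specific to our paper): a randomized algorithm M is (ε,0)-DP, i.e., pure DP, if for all datasets X, X′ differing in one record and all sets S of outputs:

Pr[M(X) ∈ S] ≤ e^ε · Pr[M(X′) ∈ S]

The "0" means no additive δ slack is allowed, in contrast to approximate (ε,δ)-DP.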
The fact that we didn't have such an algorithm before suggests something was missing in our understanding of multivariate private estimation. 2/n
This algorithm is an instance of a broader framework which employs Sum-of-Squares for private estimation. This is the first application of SoS to DP that I am aware of. We apply this framework to two sub-problems; I'm sure there are more applications lurking. 3/n
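To fix ideas, here's a minimal sketch of the *generic* exponential mechanism over a finite candidate set, the classical primitive this line of work builds on. This toy code is my own illustration, not the paper's SoS-based sampler, and all names in it are made up:

```python
import numpy as np

def exponential_mechanism(candidates, scores, epsilon, sensitivity, rng=None):
    """Textbook exponential mechanism over a finite candidate set.

    Samples candidates[i] with probability proportional to
    exp(epsilon * scores[i] / (2 * sensitivity)), which satisfies
    (epsilon, 0)-DP when `sensitivity` bounds how much changing one
    record of the dataset can move any single score.
    """
    rng = rng if rng is not None else np.random.default_rng()
    scores = np.asarray(scores, dtype=float)
    # Shift by the max score for numerical stability; the shift cancels
    # after normalization, so the sampling distribution is unchanged.
    logits = epsilon * (scores - scores.max()) / (2.0 * sensitivity)
    probs = np.exp(logits)
    probs /= probs.sum()
    return candidates[rng.choice(len(candidates), p=probs)]

# Toy usage: privately select a 1-d mean from a grid of candidates.
# Data lies in [0, 1], so replacing one of n records moves the true
# mean (and hence each score) by at most 1/n.
data = np.array([0.2, 0.4, 0.5, 0.7])
grid = np.linspace(0.0, 1.0, 101)
scores = -np.abs(grid - data.mean())
estimate = exponential_mechanism(grid, scores, epsilon=1.0,
                                 sensitivity=1.0 / len(data))
print(estimate)
```

The toy version only works for a small finite grid; doing this kind of sampling efficiently over high-dimensional candidate spaces is, roughly speaking, where the heavy lifting happens.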
(Thread) How can I say no to this? My course will be made public! Stay tuned.
I was pulling your leg here. Here's my (hot? cold?) take: academics have a *moral obligation* to make as much of their material public as possible. Not just papers, but code, talks, and lectures. 1/n
95% of the work for most content is in preparing it. So why not present it so the whole world can appreciate it? Post to arXiv, GitHub, and video sites.
All academics should learn how to record videos in high quality, say using @OBSProject (not just Zoom). It's imperative for the future. 2/n