Gautam Kamath (@thegautamkamath)
Nov 30, 2021 · 6 tweets
Paper awards for @NeurIPSConf have been announced! 🎉 #NeurIPS2021 blog.neurips.cc/2021/11/30/ann…

Congrats to all the winners! I'll link to the Outstanding Paper Awards 🧵

Outstanding Paper Award 1. A Universal Law of Robustness via Isoperimetry, by @SebastienBubeck and @geoishard.

arxiv.org/abs/2105.12806 (1/n)
Outstanding Paper Award 2. On the Expressivity of Markov Reward, by @dabelcs, @wwdabney, @aharutyu, @Mark_Ho_, @mlittmancs, Doina Precup, and Satinder Singh.

arxiv.org/abs/2111.00876 (2/n)
Outstanding Paper Award 3. Deep Reinforcement Learning at the Edge of the Statistical Precipice, by @agarwl_, @max_a_schwarzer, @pcastr, @AaronCourville, and @marcgbellemare.

arxiv.org/abs/2108.13264 (3/n)
Outstanding Paper Award 4. MAUVE: Measuring the Gap Between Neural Text and Human Text using Divergence Frontiers, by @KrishnaPillutla, @swabhz, @rown, @jwthickstun, @wellecks, @YejinChoinka, and Zaid Harchaoui.

arxiv.org/abs/2102.01454 (4/n)
Outstanding Paper Award 5. Continuized Accelerations of Deterministic and Stochastic Gradient Descents, and of Gossip Algorithms, by Mathieu Even, @RaphalBerthier1, @BachFrancis, Nicolas Flammarion, Hadrien Hendrikx, Pierre Gaillard, Laurent Massoulié, and Adrien Taylor.

arxiv.org/abs/2106.07644 (5/n)
Outstanding Paper Award 6. Moser Flow: Divergence-based Generative Modeling on Manifolds, by Noam Rozen, @adityagrover_, @mnick, and @lipmanya.

arxiv.org/abs/2108.08052 (6/n)


More from @thegautamkamath

Oct 21, 2022
It's again time to talk Canada. 🇨🇦

CS grad school and faculty app deadlines are coming up soon. If you are applying to the US, you should also be applying to Canada. The two are far more similar than different.

Ask me anything about Canada in this 🧵, and I'll answer honestly.
A good place to start is my thread and AMA from last year. In short, Canada and the US are incredibly similar along several dimensions, both culturally and geographically. We cover everything from admission requirements to money.
If you have questions, you can
a) reply on Twitter
b) send me a DM
c) send me an anonymous message (docs.google.com/forms/d/e/1FAI……).
Jul 6, 2022
🧵Fields medalist June Huh shares an early math experience: a chess puzzle in the game "The 11th Hour." Story and figures from nytimes.com/live/2022/07/0….

Can you swap the positions of the black and white knights? Seems hard, right? A new perspective makes it almost trivial! 1/n
We're going to define a graph over the (irregular) chess board. First of all, let's number the squares to give them names. 2/n
The irregular shape limits the moves available from each square. For example, from square 5, the only valid moves are to 1 and 7, so in the graph of valid moves, 5's neighbourhood is just {1, 7}. 3/n
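Once the board is a graph, the puzzle becomes mechanical: breadth-first search over board states finds the shortest sequence of moves that swaps the two sides. Below is a minimal Python sketch. Since the full "11th Hour" board isn't reproduced in this thread, it uses the classic 3x3 Guarini puzzle as a stand-in, so the adjacency list `ADJ` and the starting squares are illustrative assumptions, not the board from the game.

```python
from collections import deque

# Knight-move graph. Stand-in: the classic 3x3 Guarini board
# (squares numbered 0-8 row-major; the centre square 4 is unreachable).
# The actual "11th Hour" board is irregular, so its graph differs.
ADJ = {0: {5, 7}, 1: {6, 8}, 2: {3, 7}, 3: {2, 8},
       5: {0, 6}, 6: {1, 5}, 7: {0, 2}, 8: {1, 3}}

def swap_knights(white, black):
    """Breadth-first search over board states; returns the minimum
    number of single-knight moves to exchange the two sides."""
    start = (frozenset(white), frozenset(black))
    goal = (frozenset(black), frozenset(white))
    seen = {start}
    queue = deque([(start, 0)])
    while queue:
        (w, b), dist = queue.popleft()
        if (w, b) == goal:
            return dist
        occupied = w | b
        for sq in occupied:
            for nxt in ADJ.get(sq, ()):
                if nxt in occupied:
                    continue  # no captures; the target square must be empty
                if sq in w:
                    state = ((w - {sq}) | {nxt}, b)
                else:
                    state = (w, (b - {sq}) | {nxt})
                if state not in seen:
                    seen.add(state)
                    queue.append((state, dist + 1))
    return None  # the two sides cannot be swapped on this graph

# White knights start on the top corners, black on the bottom corners.
print(swap_knights({0, 2}, {6, 8}))  # -> 16 on this 3x3 instance
```

To run it on the real puzzle, only `ADJ` and the two starting sets would need to change; the search itself is board-agnostic.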
Apr 5, 2022
New workshop at @icmlconf: Updatable Machine Learning (UpML 2022)!

Training models from scratch is expensive. How can we update a model post-deployment but avoid this cost? Applications to unlearning, domain shift, & more!

Deadline May 12
upml2022.github.io #ICML2022 (1/3)
Featuring a stellar lineup of invited speakers: Chelsea Finn (@chelseabfinn), Shafi Goldwasser, Zico Kolter (@zicokolter), Nicolas Papernot (@NicolasPapernot), and Aaron Roth (@Aaroth).
They've studied UpML in a variety of contexts, including unlearning, robustness, and fairness. (2/3)
Workshop is co-organized with lead organizer Ayush Sekhari (@ayush_sekhari) and Jayadev Acharya (@AcharyaJayadev), and supported by an excellent program committee (being finalized).

We look forward to seeing your best work in this emerging area! (3/3)
Mar 7, 2022
🧵One of the important principles in technical communication (i.e., writing a paper, giving a talk) of complex ideas is *organization*.
That is, making it clear what the major components are & how they fit together. If you do this well you are probably 90% of the way there. (1/n)
A "top-down" approach is the tried-and-true method: first communicate the high-level ideas/steps, before delving into their details. This is good because it's "truncatable": at some point "down the tree," it's ok if the audience misses a step (and they know this). (2/n)
On the other hand, if the organization is not clear, the audience must maintain constant attention. They don't know what's important and what's not, so they can't afford to miss a thing. And everyone gets lost in a technical talk/paper sometimes, so this is key. (3/n)
Jan 2, 2022
How many of your research papers do you think will be relevant a year from now? Five years from now? 100 years from now? How do you feel about your answer?
Thinking back to the time in 2017 (arxiv.org/abs/1704.03866) when we used a result from a 120-year-old math paper written in German (degruyter.com/document/doi/1…). Though we actually trusted an English-language simplification... from 1941 (projecteuclid.org/journals/bulle…).
I think most works written right now will not be too relevant in a decade. And that's totally fine! Maybe it has its day in the sun, is interesting for a bit, and may be useful if you're lucky, but then the community progresses. Hopefully it inspires new ideas in others along the way.
Nov 30, 2021
New paper on arXiv: "Efficient Mean Estimation with Pure Differential Privacy via a Sum-of-Squares Exponential Mechanism," with @Samuel_BKH and @mahbodm_.

Finally resolves an open problem on my mind since April 2019 (~2.5 years ago).

arxiv.org/abs/2111.12981
🧵Thread ⬇️ 1/n
We give the first algorithm for mean estimation which is simultaneously:
-(ε,0)-differentially private
-O(d) sample complexity
-poly time
The fact we didn't have such an algorithm before indicates something was missing in our understanding of multivariate private estimation. 2/n
This algorithm is an instance of a broader framework which employs Sum-of-Squares for private estimation. This is the first application of SoS for DP that I'm aware of. We apply this framework to two sub-problems; I'm sure there are more applications lurking. 3/n
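For readers new to the area, here is a toy sketch of the plain exponential mechanism, the classical pure-DP primitive the paper's title alludes to. This is not the paper's SoS algorithm, just a one-dimensional illustration of an (ε,0)-DP mean estimate; the [0, 1] data range, the candidate grid, and the function name are assumptions made for the toy.

```python
import numpy as np

def private_mean_1d(x, epsilon, grid_size=1001, rng=None):
    """Toy (epsilon, 0)-DP mean estimate for data assumed to lie in
    [0, 1], via the exponential mechanism over a grid of candidates."""
    rng = rng or np.random.default_rng()
    n = len(x)
    candidates = np.linspace(0.0, 1.0, grid_size)
    # Score = negative distance to the empirical mean. Replacing one
    # record moves the mean by at most 1/n, so the score sensitivity is 1/n.
    scores = -np.abs(candidates - np.mean(x))
    sensitivity = 1.0 / n
    logits = epsilon * scores / (2 * sensitivity)
    probs = np.exp(logits - logits.max())  # numerically stable softmax
    probs /= probs.sum()
    return rng.choice(candidates, p=probs)

rng = np.random.default_rng(0)
x = rng.uniform(size=1000)                        # synthetic data in [0, 1]
print(private_mean_1d(x, epsilon=1.0, rng=rng))   # ~0.5 for this sample
```

A grid like this is fine in one dimension, but in d dimensions a naive candidate set blows up exponentially, which is roughly why an efficient multivariate analogue was open, as the thread notes.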