Supervised ML methods (e.g. ERM) assume that train & test data come from the same distribution, & deteriorate when this assumption is broken.

To help, we introduce adaptive risk minimization (ARM):
arxiv.org/abs/2007.02931

With M Zhang, H Marklund @abhishekunique7 @svlevine
(1/6)
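The failure mode in tweet 1 can be seen in a toy sketch: a least-squares ERM fit does well in-distribution but degrades badly when the test inputs shift and the input-label relationship changes there. All data and numbers below are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# ERM: minimize average loss on the training distribution.
x_train = rng.normal(loc=0.0, size=200)
y_train = x_train + 0.1 * rng.normal(size=200)   # true slope ~1 near x=0

# Closed-form ERM solution: least-squares slope through the origin.
w = (x_train @ y_train) / (x_train @ x_train)

def mse(w, x, y):
    return np.mean((w * x - y) ** 2)

# Same distribution: low error.  Shifted inputs with a different label
# rule: error blows up, since ERM never saw such inputs.
x_shift = rng.normal(loc=5.0, size=200)
y_shift = 0.2 * x_shift
in_dist_err = mse(w, x_train, y_train)
shift_err = mse(w, x_shift, y_shift)
```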
Prior work on distributionally robust optimization (DRO) aims to be _robust_ to distribution shift.

Group DRO aims for robustness to shifts in the groups underlying the dataset (e.g. see arxiv.org/abs/1611.02041).
(2/6)
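For intuition, group DRO swaps ERM's average loss for the worst average loss over groups. A minimal sketch of that surrogate objective (names and data are illustrative, not from the cited papers):

```python
import numpy as np

def worst_group_loss(losses, group_ids):
    """Group DRO surrogate: the worst per-group average loss.

    losses: per-example losses, shape (n,)
    group_ids: integer group label per example, shape (n,)
    """
    groups = np.unique(group_ids)
    group_means = [losses[group_ids == g].mean() for g in groups]
    return max(group_means)

losses = np.array([0.1, 0.2, 0.9, 1.1])
groups = np.array([0, 0, 1, 1])

# ERM averages over all examples; group DRO optimizes the worst group,
# so a model that sacrifices the minority group gets penalized.
erm = losses.mean()
dro = worst_group_loss(losses, groups)
```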
Recently, this paper showed promising results on group DRO with neural nets:
arxiv.org/abs/1911.08731

However, DRO methods often face a trade-off between robustness & test-time performance.
(3/6)
We aim to get *both* robustness and performance.

To do so, we introduce a new assumption for tackling group shift: we observe the unlabeled test data together, as a batch.

This resembles domain adaptation, except that the unlabeled test data is only available at test time.
(4/6)
Key idea: Rather than trying to be robust, we train a meta-learning algorithm to *adapt* to group distribution shift with unlabeled data.

(5/6) [Image: visualization of the meta-learning process used in adaptive risk minimization]
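A minimal sketch of the adaptation idea on a toy 1-D regression task: each group shifts the inputs by an offset, and the model adapts by conditioning on an unlabeled batch statistic (the batch mean) before predicting. Meta-training minimizes the post-adaptation loss. Everything here is illustrative, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_group_batch(offset, n=32):
    # Each "group" shifts the inputs by a group-specific offset;
    # labels depend on the de-shifted input.
    x = rng.normal(size=n) + offset
    y = 2.0 * (x - offset)
    return x, y

def adapt_and_predict(w, x_batch):
    # Adaptation step: use an unlabeled batch statistic (the mean)
    # as context, then apply a shared linear predictor.
    context = x_batch.mean()
    return w * (x_batch - context)

# Meta-training: sample a group, adapt on its unlabeled batch, then
# take an SGD step on the post-adaptation loss.
w = 0.0
for step in range(200):
    x, y = make_group_batch(offset=rng.normal(scale=3.0))
    pred = adapt_and_predict(w, x)
    grad = 2 * np.mean((pred - y) * (x - x.mean()))
    w -= 0.1 * grad

# At test time, a batch from an unseen group is handled the same way:
# adapt to the batch, then predict.
x_test, y_test = make_group_batch(offset=10.0)
err = np.mean((adapt_and_predict(w, x_test) - y_test) ** 2)
```

The design point: because adaptation only uses unlabeled statistics of the incoming batch, it can run at test time on groups never seen during training.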
This generally leads to higher average-case and worst-case performance on both robustness benchmarks & a federated learning setting.
(6/6) [Image: results from the adaptive risk minimization (ARM) paper]