Chelsea Finn
Feb 9 · 7 tweets · 4 min read
What should ML models do when there's a *perfect* correlation between spurious features and labels?

This is hard b/c the problem is fundamentally _underdefined_

DivDis can solve this problem by learning multiple diverse solutions & then disambiguating
arxiv.org/abs/2202.03418
🧵
Prior works have made progress on robustness to spurious features but also have important weaknesses:
- They can't handle perfect/complete correlations
- They often need labeled data from the target distr. for hparam tuning
DivDis can address both challenges, using 2 stages:
1. The Diversify stage learns multiple functions that minimize training error but have differing predictions on unlabeled target data
2. The Disambiguate stage uses a few active queries to identify the correct function
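The Diversify stage can be sketched as a two-term objective: fit the labeled training data while pushing heads to disagree on unlabeled target data. Below is a minimal, hypothetical sketch (not the authors' code; the paper's actual repulsion term is based on mutual information, and `diversify_loss` is an invented helper):

```python
import torch
import torch.nn.functional as F

def diversify_loss(heads, x_train, y_train, x_target, lam=1.0):
    """DivDis-style Diversify objective (illustrative sketch).

    Every head must fit the labeled (spuriously correlated) training
    data, while pairs of heads are pushed to *disagree* on unlabeled
    target inputs, so that different heads latch onto different features.
    """
    # 1) Standard cross-entropy on the training set, summed over heads.
    train_loss = sum(F.cross_entropy(h(x_train), y_train) for h in heads)

    # 2) Agreement penalty on unlabeled target data: here the negative
    #    pairwise L1 distance between predicted distributions (the paper
    #    uses a mutual-information term instead).
    probs = [F.softmax(h(x_target), dim=-1) for h in heads]
    agree = 0.0
    for i in range(len(probs)):
        for j in range(i + 1, len(probs)):
            agree = agree - (probs[i] - probs[j]).abs().mean()
    return train_loss + lam * agree
```

After training with this objective, the Disambiguate stage only needs a few labeled target queries to pick which head generalizes correctly.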
I'm super excited about DivDis for a few reasons.

First, it can start to address underspecified problems with perfect spurious correlations, with mild assumptions.

It can also combat simplicity bias when the spurious feature is much simpler than the core feature
Second, it yields good performance even when hyperparameters are tuned on held-out data from the training distribution
Third, it conceptually addresses a problem that Bayesian NNs & ensembles struggle with.

By leveraging unlabeled data from the target distribution (the transductive setting), it can cover the space of relevant solutions much more effectively.
Finally, this was also a problem I was puzzled by a year ago, and it's awesome to have an initial solution to the puzzle. :)

DivDis Paper: arxiv.org/abs/2202.03418
Website: sites.google.com/view/diversify…

Led by @yoonholeee with @HuaxiuYaoML


More from @chelseabfinn

Oct 22, 2021
Large language models (LLMs) often make mistakes that are difficult to correct.

We study the problem of quickly editing these models:
Paper: arxiv.org/abs/2110.11309
Code: github.com/eric-mitchell/…

w/ @_eric_mitchell_, C. Lin, @ABosselut, @chrmanning

thread 🧵👇
We assume a pre-trained model & a dataset that covers many possible model edits

Then, we meta-train a model editor that predicts a model update that:
- edits the model
- otherwise keeps the model behavior the same

(2/4)
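The two requirements on the predicted update (apply the edit, leave everything else alone) can be written as a two-term meta-objective. Here is a heavily simplified sketch for a single linear layer, assuming a hypothetical `editor` network that maps gradients to parameter updates (this is not the authors' implementation):

```python
import torch
import torch.nn.functional as F

def editor_step(model, editor, x_edit, y_edit, x_loc):
    """One meta-training step of a model-editor (illustrative sketch).

    `model` is a torch.nn.Linear; `editor` maps the flattened gradient
    of the edit loss to a flattened weight update.
    """
    # Gradient of the edit loss w.r.t. the weight we want to edit.
    w = model.weight
    edit_loss = F.cross_entropy(model(x_edit), y_edit)
    (grad,) = torch.autograd.grad(edit_loss, w, create_graph=True)

    # The editor predicts an update from the raw gradient.
    delta = editor(grad.flatten()).view_as(w)
    edited_logits = F.linear(x_edit, w + delta, model.bias)
    loc_logits = F.linear(x_loc, w + delta, model.bias)

    # Outer objective, term 1: the edit should succeed...
    success = F.cross_entropy(edited_logits, y_edit)
    # ...term 2: behavior on unrelated inputs stays unchanged
    # (KL divergence to the un-edited model's predictions).
    with torch.no_grad():
        orig = F.log_softmax(model(x_loc), dim=-1)
    locality = F.kl_div(F.log_softmax(loc_logits, dim=-1), orig,
                        log_target=True, reduction="batchmean")
    return success + locality
```

Minimizing this meta-objective trains the editor so that, at deployment time, a single gradient-in/update-out pass applies an edit.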
You can train model editors for massive models (e.g. GPT-J, T5-11B) in <1 day on a single GPU.

Edits with the resulting model editor are extremely fast, with an edit success rate of 80-90%.
(3/4)
Sep 22, 2021
RL methods so often learn from _scratch_. Can they leverage offline experience from previous tasks?

They can. And if they do, they will learn new tasks ~2x faster.

Paper: arxiv.org/abs/2109.09180
Website: sites.google.com/view/retain-ex…

Led by Annie Xie. 🧵👇(1/4)
Many prior transfer learning methods try to transfer weights, e.g through fine-tuning.

We consider whether we can also transfer past *experiences*, rather than throwing away the prior data.
(2/4)
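The core mechanic of transferring experience rather than weights can be sketched in a few lines: each update draws a minibatch that mixes transitions from prior-task replay buffers with fresh experience from the new task. This is a minimal illustration under assumed names (`mixed_batch` is invented; the paper additionally filters prior experience for relevance):

```python
import random

def mixed_batch(prior_buffer, current_buffer, batch_size=8, prior_frac=0.5):
    """Mix retained prior-task transitions with new-task transitions.

    Instead of discarding old data when a new task starts, a fraction
    of every training batch comes from the retained replay buffers.
    """
    n_prior = int(batch_size * prior_frac)
    batch = random.sample(prior_buffer, min(n_prior, len(prior_buffer)))
    batch += random.sample(current_buffer,
                           min(batch_size - len(batch), len(current_buffer)))
    random.shuffle(batch)
    return batch
```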
Retaining & filtering experiences performs *substantially* better than fine-tuning & other prior methods

It also outperforms learning from scratch, even when learning from scratch with 2x the data.
(3/4)
Jul 20, 2021
Thrilled to share new work on AI for education: can we give detailed, high-quality feedback to students?

Post: ai.stanford.edu/blog/prototran…
NYT Coverage: nytimes.com/2021/07/20/tec…

A collab w. the amazing @mike_h_wu @chrispiech & co 🧵
2/ Student feedback is a fundamental problem in scaling education.

Providing good feedback is hard: existing approaches give canned responses, cryptic error messages, or simply the answer.
3/ Providing feedback is also hard for ML: not a ton of data, teachers frequently change their assignments, and student solutions are open-ended and long-tailed.

Supervised learning doesn’t work. We weren’t sure this problem could even be solved with ML.
Apr 1, 2021
How can robots generalize to new environments & tasks?

We find that using in-the-wild videos of people can allow learned reward functions to do so!
Paper: arxiv.org/abs/2103.16817

Led by @_anniechen_, @SurajNair_1
🧵(1/5)
To get reward functions that generalize, we train domain-agnostic video discriminators (DVD) with:
* a lot of diverse human data, and
* a narrow & small amount of robot demos

The idea is super simple: predict if two videos are performing the same task or not.
(2/5)
This discriminator can be used as a reward by feeding in a human video of the desired task and a video of the robot’s behavior.

We use it by planning with a learned visual dynamics model.
(3/5)
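Using the discriminator as a reward fits naturally into sampling-based planning: score each candidate robot rollout by the predicted probability that it performs the same task as the human demo, then execute the best one. A minimal sketch, assuming a hypothetical `discriminator(human_video, robot_video) -> logit` interface:

```python
import torch

def dvd_reward(discriminator, human_video, robot_videos):
    """Score candidate robot rollouts with a DVD-style discriminator.

    Each rollout's reward is the predicted probability that it shows
    the *same task* as the human demonstration video.
    """
    scores = []
    for robot_video in robot_videos:
        logit = discriminator(human_video, robot_video)
        scores.append(torch.sigmoid(logit))
    return torch.stack(scores)  # one reward per candidate rollout
```

In the paper's setup, the candidate robot videos come from rolling out action sequences through a learned visual dynamics model, so planning amounts to picking the action sequence whose predicted video scores highest.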
Jul 8, 2020
Convolution is an example of structure we build into neural nets. Can we _discover_ convolutions & other symmetries from data?

Excited to introduce:
Meta-Learning Symmetries by Reparameterization
arxiv.org/abs/2007.02933

w/ @allan_zhou1 @TensorProduct @StanfordAILab
Thread👇
To think about this question, we first look at how equivariances are represented in neural nets.

They can be seen as certain weight-sharing & weight-sparsity patterns. For example, consider convolutions.
(2/8)
We reparametrize a weight matrix into a sharing matrix & underlying filter parameters

It turns out this can provably represent any equivariant structure + filter parameters, for all group-equivariant convolutions with finite groups.
(3/8)
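To make the reparameterization concrete: with sharing matrix U and filter parameters v, the layer's weights are W = (U @ v).reshape(n, n), and a particular 0/1 choice of U reproduces a circular 1-D convolution. A small illustrative construction (the helper name is invented; in the paper U itself is meta-learned rather than hand-built):

```python
import torch

def conv_sharing_matrix(n, k):
    """Build the 0/1 sharing matrix U such that W = (U @ v).reshape(n, n)
    is a circular 1-D convolution with a length-k filter v.

    This shows convolution is one point in the space of weight-sharing
    patterns the reparameterization can represent.
    """
    U = torch.zeros(n * n, k)
    for i in range(n):          # output position
        for t in range(k):      # filter tap
            j = (i + t) % n     # input position this tap reads (circular)
            U[i * n + j, t] = 1.0
    return U
```

With this U, every row of W is a shifted copy of the filter, i.e. exactly the weight-sharing and sparsity pattern of a convolution.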
Jul 7, 2020
Supervised ML methods (i.e. ERM) assume that train & test data are from the same distribution, & deteriorate when this assumption is broken.

To help, we introduce adaptive risk minimization (ARM):
arxiv.org/abs/2007.02931

With M Zhang, H Marklund @abhishekunique7 @svlevine
(1/6)
Prior works on distributionally robust optimization (DRO) aim to be _robust_ to distribution shift.

Group DRO aims for robustness to shifts in groups underlying the dataset. (e.g. see arxiv.org/abs/1611.02041)
(2/6)
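The group DRO idea can be stated in a few lines: instead of minimizing the average loss (ERM), minimize the loss of the worst-off group, so a spuriously easy majority group cannot dominate training. A minimal sketch (the function name is invented; practical group DRO uses an online reweighting rather than a hard max):

```python
import torch
import torch.nn.functional as F

def worst_group_loss(logits, labels, group_ids, num_groups):
    """Group-DRO objective (illustrative sketch): the maximum of the
    per-group average losses, rather than the overall average (ERM)."""
    per_example = F.cross_entropy(logits, labels, reduction="none")
    group_losses = []
    for g in range(num_groups):
        mask = group_ids == g
        if mask.any():  # skip groups absent from this batch
            group_losses.append(per_example[mask].mean())
    return torch.stack(group_losses).max()
```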
Recently, this paper showed promising results on group DRO with neural nets:
arxiv.org/abs/1911.08731

However, DRO methods often trade-off between robustness & test-time performance.
(3/6)