How can robots generalize to new environments & tasks?

We find that using in-the-wild videos of people can allow learned reward functions to do so!
Paper: arxiv.org/abs/2103.16817

Led by @_anniechen_, @SurajNair_1
🧵(1/5)
To get reward functions that generalize, we train domain-agnostic video discriminators (DVD) with:
* a lot of diverse human data, and
* a small & narrow set of robot demos

The idea is super simple: predict if two videos are performing the same task or not.
(2/5)
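The core idea — classify whether two videos show the same task — can be sketched in toy form. This is an illustrative sketch, not the paper's implementation: DVD uses a learned video encoder trained on human and robot data, whereas here a "video" is just a list of per-frame feature vectors and the "encoder" is mean pooling.

```python
import math

def encode(video):
    """Mean-pool per-frame feature vectors into one video embedding.
    (Stand-in for a learned video encoder.)"""
    dim = len(video[0])
    return [sum(frame[i] for frame in video) / len(video) for i in range(dim)]

def same_task_score(video_a, video_b):
    """Score in (0, 1) that the two videos perform the same task:
    sigmoid of the cosine similarity between the video embeddings."""
    a, b = encode(video_a), encode(video_b)
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return 1.0 / (1.0 + math.exp(-dot / (norm + 1e-8)))
```

In the real method this same-task score is the discriminator output, trained with cross-entropy on positive (same task) and negative (different task) video pairs drawn from both human and robot data.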
This discriminator can be used as a reward by feeding in a human video of the desired task and a video of the robot’s behavior.

We use it by planning with a learned visual dynamics model.
(3/5)
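The planning step can be sketched as follows. Everything here is a hypothetical stand-in: the paper samples action sequences with a sampling-based planner and a learned visual dynamics model, while this toy enumerates a small candidate set and accepts any `dynamics` and `reward` functions — plugging in the discriminator score against a human demo video as `reward` gives the DVD setup.

```python
def rollout(dynamics, state, actions):
    """Predict the robot 'video' (state sequence) for one action sequence."""
    video = [state]
    for action in actions:
        state = dynamics(state, action)
        video.append(state)
    return video

def plan(dynamics, reward, state, human_video, candidates):
    """Pick the candidate action sequence whose predicted rollout best
    matches the human demonstration under the learned reward."""
    best_score, best_actions = float("-inf"), None
    for actions in candidates:
        score = reward(human_video, rollout(dynamics, state, actions))
        if score > best_score:
            best_score, best_actions = score, actions
    return best_actions
```

Because the reward only asks "does this look like the same task as the human video?", the same planner generalizes to new tasks by simply swapping in a different human video.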
Does using human videos improve reward generalization compared to using only narrow robot data?

We see:
* 20% greater task success in new environments
* 25% greater task success on new tasks
both in simulation and on a real robot.

(4/5)
For more, check out:
Paper: arxiv.org/abs/2103.16817
Website: sites.google.com/view/dvd-human…
Summary video: drive.google.com/file/d/1WsOwgc…

I'm quite excited about how reusing broad datasets can help robots generalize, and this project is an encouraging step in that direction!

(5/5)
Thread by Chelsea Finn

