Chelsea Finn
Apr 1, 2021 · 5 tweets · 3 min read
How can robots generalize to new environments & tasks?

We find that using in-the-wild videos of people can allow learned reward functions to do so!
Paper: arxiv.org/abs/2103.16817

Led by @_anniechen_, @SurajNair_1
🧵(1/5)
To get reward functions that generalize, we train domain-agnostic video discriminators (DVD) with:
* a lot of diverse human data, and
* a narrow, small set of robot demos

The idea is super simple: predict if two videos are performing the same task or not.
(2/5)
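The "super simple" idea above can be sketched as a binary classifier over video pairs. This is only a minimal illustration, not the paper's architecture: the feature vectors, linear pair classifier, and weights here are hypothetical stand-ins for the real video encoder and discriminator.

```python
import math

def dvd_loss(f1, f2, same_task, w, b):
    """Binary cross-entropy for "do these two videos show the same task?".

    f1, f2: feature vectors (lists) for the two videos -- hypothetical
            embeddings standing in for a real video encoder.
    w, b:   weights and bias of a toy linear pair classifier.
    same_task: 1 if the two videos perform the same task, else 0.
    """
    pair = f1 + f2                                   # concatenate the two videos' features
    score = sum(wi * xi for wi, xi in zip(w, pair)) + b
    p_same = 1.0 / (1.0 + math.exp(-score))          # predicted P(same task)
    # standard binary cross-entropy against the same-task label
    return -(same_task * math.log(p_same) + (1 - same_task) * math.log(1.0 - p_same))
```

Training on many (human, human) and (human, robot) pairs with this kind of objective is what lets the discriminator score robot behavior against human videos later.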
This discriminator can be used as a reward by feeding in a human video of the desired task and a video of the robot’s behavior.

We use it by planning with a learned visual dynamics model.
(3/5)
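As a rough sketch of "planning with a learned visual dynamics model": a simple random-shooting planner that samples action sequences, rolls them through the model, and scores the result with the learned reward. This is an illustrative stand-in, not the paper's actual planner; `dynamics` and `reward` are assumed callables.

```python
import random

def plan(state, dynamics, reward, n_samples=64, horizon=5, act_dim=2, seed=0):
    """Random-shooting planner sketch: sample candidate action sequences,
    roll each through the learned dynamics model, score the final state
    with the learned reward (e.g. the DVD discriminator fed a human video
    of the desired task), and return the first action of the best sequence."""
    rng = random.Random(seed)
    best_first, best_r = None, float("-inf")
    for _ in range(n_samples):
        actions = [[rng.uniform(-1, 1) for _ in range(act_dim)]
                   for _ in range(horizon)]
        s = state
        for a in actions:
            s = dynamics(s, a)          # rollout in the learned model, not the real env
        r = reward(s)                   # how "same task" does this look?
        if r > best_r:
            best_r, best_first = r, actions[0]
    return best_first
```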
Does using human videos improve reward generalization compared to using only narrow robot data?

We see:
* 20% greater task success in new environments
* 25% greater task success on new tasks
both in simulation and on a real robot.

(4/5)
For more, check out:
Paper: arxiv.org/abs/2103.16817
Website: sites.google.com/view/dvd-human…
Summary video: drive.google.com/file/d/1WsOwgc…

I'm quite excited about how reusing broad datasets can help robots generalize, and this project has been a great indication in that direction!

(5/5)

More from @chelseabfinn

Jan 27, 2023
LLMs like ChatGPT are becoming more fluent – how can we detect if something was written by a language model or a human?

We developed DetectGPT: a method for detecting if a passage was written by a particular language model.
[Image: visualization showing a candidate passage going into DetectGPT]
Why does this matter?

Large language models are already being used to:
* write news articles (sometimes with major errors!)
* cheat on homework

Can we help humans spot LLM-written text?

cnet.com/tech/cnet-is-t…
So, how does it work?

DetectGPT measures the probability that a model assigns to the written text and compares it to the probability it assigns to perturbed versions of that text.

If the probability of the original is much higher than that of the modified text, the passage was likely generated by the model.
[Image: visualization showing that generated examples usually sit near a local maximum of model probability]
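That comparison can be sketched in a few lines. This is a simplified version of the statistic: the perturbation step (masking and refilling spans with another model) and the decision threshold are abstracted away, and the threshold value here is a hypothetical placeholder.

```python
def detectgpt_score(logp_original, logp_perturbed):
    """Simplified DetectGPT statistic: the model's log-probability of the
    candidate passage minus the mean log-probability of several perturbed
    versions of it.  A large positive gap means the passage sits near a
    local maximum of the model's probability -- evidence that this model
    generated it."""
    return logp_original - sum(logp_perturbed) / len(logp_perturbed)

def looks_model_generated(logp_original, logp_perturbed, threshold=1.0):
    # threshold is a hypothetical value; in practice it would be tuned on data
    return detectgpt_score(logp_original, logp_perturbed) > threshold
```

Human-written text tends not to sit at such a local maximum, so its score stays near zero.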
Oct 27, 2022
Common fine-tuning wisdom is to adapt the last layer or the entire neural net.

We find that, sometimes, fine-tuning *only* the first layers or middle layers works best.

Paper: arxiv.org/abs/2210.11466

A short 🧵
[Image: Figure 1 in the paper, showing first-block fine-tuning]
One of the most reliable ways to handle distribution shift is to fine-tune on a small amount of data.

We find that the best layers to fine-tune depend on the *type* of shift!

Compared to fine-tuning the whole network, fine-tuning just one block achieves similar or higher accuracy. ⬇️
[Image: Figure 2 in the paper]
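Mechanically, fine-tuning only one block just means masking the update for everything else. A minimal sketch, with parameters represented as a dict of lists rather than real network weights:

```python
def surgical_update(params, grads, tune_block, lr=0.1):
    """One fine-tuning step that touches *only* the chosen block:
    parameters in `tune_block` get a gradient step; every other block
    stays frozen.

    params, grads: dicts mapping block name -> list of parameter values
                   (a toy stand-in for per-layer weight tensors)."""
    return {name: [p - lr * g for p, g in zip(ps, grads[name])]
            if name == tune_block else list(ps)
            for name, ps in params.items()}
```

In a framework like PyTorch the same effect is usually achieved by setting `requires_grad = False` on the frozen blocks before building the optimizer.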
Why might this be the case?

We don't know.🤔

But, perhaps neural nets approximately invert the causal process, and these distribution shifts are changes in independent causal mechanisms.

This kind of analysis can probably also shed light on the nature of different kinds of distribution shifts!
Feb 9, 2022
What should ML models do when there's a *perfect* correlation between spurious features and labels?

This is hard because the problem is fundamentally _underspecified_

DivDis can solve this problem by learning multiple diverse solutions & then disambiguating
arxiv.org/abs/2202.03418
🧵
Prior works have made progress on robustness to spurious features but also have important weaknesses:
- They can't handle perfect/complete correlations
- They often need labeled data from the target distribution for hyperparameter tuning
DivDis can address both challenges, using 2 stages:
1. The Diversify stage learns multiple functions that minimize training error but have differing predictions on unlabeled target data
2. The Disambiguate stage uses a few active queries to identify the correct function
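The Disambiguate stage in step 2 can be sketched very simply. This is an illustrative reduction, not the paper's exact procedure: heads are represented as fixed prediction lists, and the "active queries" are just a handful of labeled indices.

```python
def disambiguate(head_preds, query_idx, query_labels):
    """DivDis stage 2 (simplified): given the predictions of several
    diverse heads on unlabeled target data, query true labels for a few
    points and keep the head that agrees with them most.

    head_preds:   list of per-head prediction lists over the target set
    query_idx:    indices of the actively queried points
    query_labels: the true labels obtained for those points
    """
    def acc(preds):
        return sum(preds[i] == y for i, y in zip(query_idx, query_labels)) / len(query_idx)
    accs = [acc(p) for p in head_preds]
    return max(range(len(accs)), key=accs.__getitem__)   # index of best head
```

Because the Diversify stage forces the heads to disagree on target data, even a few queries are enough to tell the heads apart.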
Oct 22, 2021
Large language models (LLMs) often make mistakes that are difficult to correct.

We study the problem of quickly editing these models:
Paper: arxiv.org/abs/2110.11309
Code: github.com/eric-mitchell/…

w/ @_eric_mitchell_, C. Lin, @ABosselut, @chrmanning

thread 🧵👇
We assume a pre-trained model & a dataset that covers many possible model edits

Then, we meta-train a model editor that predicts a model update that:
- edits the model
- otherwise keeps the model behavior the same

(2/4)
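The two requirements above pull in opposite directions, which the meta-training objective balances. A minimal sketch of that trade-off, with hypothetical scalar/list inputs rather than real model predictions:

```python
def editor_objective(edit_loss, preds_before, preds_after):
    """Simplified meta-training objective for a model editor: reward a
    successful edit (low `edit_loss` on the edited fact) while penalizing
    drift on unrelated inputs, so the rest of the model's behavior stays
    the same.

    edit_loss:    loss of the edited model on the desired new output
    preds_before: the original model's predictions on unrelated inputs
    preds_after:  the edited model's predictions on the same inputs
    """
    locality = sum((a - b) ** 2
                   for a, b in zip(preds_after, preds_before)) / len(preds_before)
    return edit_loss + locality   # minimize both terms jointly
```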
You can train model editors for massive models (e.g. GPT-J, T5-11B) in <1 day on a single GPU.

Edits with the resulting model editor are extremely fast, with an edit success rate of 80-90%.
(3/4)
Sep 22, 2021
RL methods so often learn from _scratch_. Can they leverage offline experience from previous tasks?

They can. And if they do, they will learn new tasks ~2x faster.

Paper: arxiv.org/abs/2109.09180
Website: sites.google.com/view/retain-ex…

Led by Annie Xie. 🧵👇(1/4)
[Image: Lifelong Robotic Reinforcement Learning by Retaining Experiences]
Many prior transfer learning methods try to transfer weights, e.g through fine-tuning.

We consider whether we can also transfer past *experiences*, rather than throwing away the prior data.
(2/4)
[Image: a diagram of the method, which restores replay buffers from prior tasks]
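Transferring experience rather than weights can be sketched as rebuilding the replay buffer from prior tasks' data. This is a simplified illustration: `keep_fraction` is a hypothetical mixing knob, and the paper additionally *filters* which past experiences are relevant rather than subsampling uniformly.

```python
import random

def build_replay(prior_buffers, new_task_data, keep_fraction=0.5, seed=0):
    """Reuse past experience (simplified): rather than discarding prior
    tasks' replay buffers, subsample a fraction of each and mix it with
    the new task's data.

    prior_buffers: list of transition lists from previously learned tasks
    new_task_data: transitions collected on the new task
    """
    rng = random.Random(seed)
    retained = []
    for buf in prior_buffers:
        n = int(keep_fraction * len(buf))
        retained.extend(rng.sample(buf, n))   # keep a subset of old transitions
    return retained + list(new_task_data)     # combined buffer for the new task
```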
Retaining & filtering experiences performs *substantially* better than fine-tuning & other prior methods

It also outperforms learning from scratch, even when learning from scratch with 2x the data.
(3/4)
[Image: table of results, showing substantially better performance]
Jul 20, 2021
Thrilled to share new work on AI for education: can we give detailed, high-quality feedback to students?

Post: ai.stanford.edu/blog/prototran…
NYT Coverage: nytimes.com/2021/07/20/tec…

A collab w. the amazing @mike_h_wu @chrispiech & co 🧵
2/ Student feedback is a fundamental problem in scaling education.

Providing good feedback is hard: existing approaches give canned responses, cryptic error messages, or simply the answer.
3/ Providing feedback is also hard for ML: not a ton of data, teachers frequently change their assignments, and student solutions are open-ended and long-tailed.

Supervised learning doesn’t work. We weren’t sure this problem could even be solved with ML.
