Thread by Connor Leahy
arxiv.org/abs/2009.01719
This paper (by @FelixHill84 et al.) really feels like an "It's all coming together" moment for @DeepMind.

Let me try to describe my takeaways from my first read-through.

1/14
The paper tests several variants of a 3D environment containing a few objects (usually 3). When the agent looks at an object, it also receives the object's name as natural-language input.

2/14
At first, there is a "discovery phase" in which the agent can just look around and figure out its environment. It is then instructed to pick up one of the objects it saw. There are several other variants in the paper, but this is the base task (toy sketch of the episode structure below).

3/14
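To make the setup concrete, here is a toy rendering of that episode structure as I understand it. This is entirely my own stand-in code, with made-up object names like "dax"; it is not the paper's environment API.

```python
import random
from dataclasses import dataclass


@dataclass
class Observation:
    image: str      # stand-in for the first-person pixels
    language: str   # the name of the object in view, or the instruction


def run_episode(objects=("dax", "blicket", "zorp")):
    # Discovery phase: the agent looks around; each time it fixates an
    # object it also receives that object's name as language input.
    seen = [Observation(image=f"<pixels of {name}>",
                        language=f"this is a {name}")
            for name in objects]

    # Instruction phase: the agent is told to pick up one of those objects.
    target = random.choice(objects)
    instruction = Observation(image="<pixels of the room>",
                              language=f"pick up the {target}")
    # A real agent would now query whatever it memorised from `seen`
    # to figure out which object the instruction refers to.
    return seen, instruction


seen, instruction = run_episode()
print(instruction.language)
```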
They propose a "Dual-Coding Episodic Memory" (DCEM) model to solve this problem, and it does so very well. The DCEM is built around a kind of key-value store that uses the language embedding as the key and the image embedding as the value. I like the idea a lot (rough sketch below).

4/14
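Here is a rough sketch of the key-value retrieval idea as I read it. This is my own numpy toy, not the authors' architecture; the embedding size and the softmax attention are assumptions on my part.

```python
import numpy as np


class DualCodingMemory:
    """Toy key-value episodic memory: language embeddings as keys,
    image embeddings as values (my reading of the idea, not the paper's code)."""

    def __init__(self):
        self.keys = []    # language embeddings, one per "glance" at an object
        self.values = []  # the image embeddings seen at the same moment

    def write(self, lang_emb: np.ndarray, img_emb: np.ndarray) -> None:
        self.keys.append(lang_emb)
        self.values.append(img_emb)

    def read(self, query: np.ndarray) -> np.ndarray:
        # Soft attention: score each stored language key against the query,
        # then return the attention-weighted sum of stored image embeddings.
        K = np.stack(self.keys)                    # (N, d)
        V = np.stack(self.values)                  # (N, d)
        scores = K @ query / np.sqrt(K.shape[-1])  # (N,)
        weights = np.exp(scores - scores.max())
        weights /= weights.sum()
        return weights @ V                         # (d,)


# Write during the discovery phase, read when the instruction arrives:
rng = np.random.default_rng(0)
mem = DualCodingMemory()
mem.write(rng.normal(size=64), rng.normal(size=64))  # "this is a dax"
mem.write(rng.normal(size=64), rng.normal(size=64))  # "this is a blicket"
retrieved = mem.read(rng.normal(size=64))            # query: "pick up the dax"
print(retrieved.shape)  # (64,)
```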
If the agent is trained with 3 objects in the room, its performance with 5 or 8 is sorta bad, but, strangely, training it with 5 seems enough to make it pretty good at 3, 5, and 8.

5/14
But even the base performance is seemingly better than that of 40-month-old infants, which seems both important and sort of funny. Are we sure humans are General Intelligence?

6/14
What surprised me the most is that the agent has no problem dealing with objects it never saw during training.

7/14
Yea, you and me both lol.

8/14
Intrinsic motivation/curiosity too? It's like DM got my Christmas list!

9/14
Here we can see how weighting the intrinsic curiosity for images versus language affects the model. Interestingly, too much curiosity for images leads to failure, probably because the agent goes looking for interesting corners of the room instead of doing its task (sketch of how I read this below).

10/14
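My mental model of that ablation, in toy form. The reward decomposition and the coefficient names w_image / w_language are mine, not the paper's; the point is just that two separate bonuses get weighted against the task reward.

```python
def total_reward(task_reward: float,
                 image_novelty: float,
                 language_novelty: float,
                 w_image: float = 0.1,
                 w_language: float = 0.1) -> float:
    # Crank w_image too high and the agent goes off to stare at
    # "interesting" corners of the room instead of finishing the task.
    return task_reward + w_image * image_novelty + w_language * language_novelty


print(total_reward(task_reward=1.0, image_novelty=0.3, language_novelty=0.5))
```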
They also try alternative settings, including having the agent interact with both unfamiliar objects and an unfamiliar task (!)

11/14
They also tried the agent in a totally different 3D engine/environment on a different task. The results might not look amazing (0.86 ± 0.67, lol?), but: "we applied the [...] architectures directly to this environment with no environment-specific tuning" (!)

12/14
While I like their DCEM model and think it has a lot of inherently appealing characteristics, the bitter lesson strikes again: a Gated Transformer-XL was basically equivalent in performance (though slightly less sample-efficient).

Press F in the chat.

13/14
In summary: I love this paper and what it tries to do. I think it may be overemphasizing the usefulness of DCEM at this very moment, but that doesn't take away from how cool and AGI-firealarm-y this is.

14/14