Roma Patel
currently cs phd student @BrownUniversity. language & rl & multi-agent rl & interpretability. intern @googleai '19, @deepmind '20. (she/her)

Apr 27, 2020, 5 tweets

here is work at #ICLR2020 by @FelixHill84 @AndrewLampinen @santoroAI + coauthors that nicely verifies (a subset of) the things put forth by several #NLProc position papers on language/grounding/meaning/form. some insights and reasons to read this paper: 1/5

this deals with richer 3D (rather than 2D) environments that allow systematic evaluation of phenomena against a wider variety of stimuli (here, only visual, but this could extend to other sensory information, e.g., action effects, deformation, touch, sound + more realistic simulators). 2/5

unlike a lot of previous work, the tasks are not only navigation (whether over discrete / continuous spaces) but also involve manipulation, positioning, and gaze (through visual rays), which are far more complex motor activities. 3/5

useful insights are uncovered about agent perspective (egocentric vs. allocentric) and which of these allows more intelligent behaviour! other useful (albeit less surprising) insights are that the degree of systematic generalisation increases with the number of objects/words experienced in training. 4/5

overall, this work from @DeepMind folks nicely shows how richer, multimodal environments are _required_ for generalisation in intelligent agents. excited to see future work that extends to realistic environments + multiple kinds of sensory information alongside language. 5/5
