Roma Patel
Apr 27, 2020 · 5 tweets
here is work at #ICLR2020 by @FelixHill84 @AndrewLampinen @santoroAI + coauthors that nicely verifies (a subset of) the things put forth by several #NLProc position papers on language/grounding/meaning/form. some insights and reasons to read this paper: 1/5
this work uses richer 3D (rather than 2D) environments that allow systematic evaluation of phenomena against a wider variety of stimuli (here, only visual, but extensible to other sensory information, e.g. action effects, deformation, touch, sound, and more realistic simulators). 2/5
unlike a lot of previous work, the tasks are not only navigation (whether over discrete or continuous spaces) but also involve manipulation, positioning, and gaze (through visual rays), which are far more complex motor activities. 3/5
useful insights are uncovered about agent perspective (egocentric vs. allocentric) and which of the two allows more intelligent behaviour! another useful (albeit less surprising) insight is that the degree of systematic generalisation increases with the number of object/word experiences in training. 4/5
overall, this work from @DeepMind folks nicely portrays how richer, multimodal environments are _required_ for generalisation of intelligent agents. excited to see future work that extends to realistic environments + multiple kinds of sensory information alongside language. 5/5
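the egocentric vs. allocentric distinction mentioned above is just a change of reference frame. as a toy illustration (not the paper's setup; the function name and 2D simplification are my own), converting a world-frame object position into the agent's own frame looks like:

```python
import math

def allocentric_to_egocentric(obj_xy, agent_xy, agent_heading):
    """Rotate/translate a world-frame (allocentric) 2D object position
    into the agent's own (egocentric) frame, where the agent sits at the
    origin facing along +x."""
    dx = obj_xy[0] - agent_xy[0]
    dy = obj_xy[1] - agent_xy[1]
    # rotate the relative offset by minus the agent's heading
    c, s = math.cos(-agent_heading), math.sin(-agent_heading)
    return (c * dx - s * dy, s * dx + c * dy)
```

for example, an agent at the origin facing +y sees an object at world position (0, 1) as directly ahead of it, i.e. at egocentric (1, 0). an egocentric encoding ties observations to the agent's own body, which is part of why perspective choice matters for learning.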

More from @996roma

Oct 3, 2020
if you are applying to PhD programs in CS, this is for you! specifically, we've seen lots of opportunities tailored towards applicants from underrepresented groups, so several grad students at @browncs compiled a list that we hope is generally helpful cs.brown.edu/degrees/doctor… 1/6
this compilation of resources includes perspectives from PhD students (@kalpeshk2011, @nelsonfliu), advice from faculty members (@ybisk, @adveisner), and an overview of some existing initiatives aimed towards mentoring underrepresented applicants applying to PhD programs. 2/6
if you belong to an underrepresented group in AI/ML, our student-run applicant support program @BrownCSDept would love to hear from you! we are here to offer feedback on applications and advice, to the best of our abilities, on anything related to your PhD application. 3/6
Apr 28, 2020
this paper from @shaohua0116 on guiding RL agents with program counterparts of natural language instructions was one of my favourites at #ICLR2020. here is why i think it's exciting and quite different from existing work. 1/6
there's a large literature of #NLProc work on semantic parsing (converting language into executable meaning representations) for a variety of tasks. this is helpful, e.g., for database operations, for specifying goals/rewards for planners, or for grounding to predefined robotic actions. 2/6
apart from select works, the programs are usually treated as static: their executions are pre-defined, they are used once at some beginning/end-point (e.g. to produce a goal state for some RL algorithm/planner), and they do not extend over time or with interaction. 3/6
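to make "language -> executable meaning representation" concrete, here is a deliberately naive sketch (my own toy grammar, not the paper's parser or any real semantic parsing system): a handful of command templates mapped to (action, argument) tuples that a planner could execute.

```python
def parse(instruction):
    """Naively map 'verb (the) object' commands to (ACTION, object) tuples.
    A real semantic parser would be learned and compositional; this toy
    version just pattern-matches a few hard-coded verb phrases."""
    actions = {"pick up": "PICK_UP", "go to": "GO_TO", "push": "PUSH"}
    text = instruction.lower().strip()
    for phrase, act in actions.items():
        if text.startswith(phrase + " "):
            # strip the verb phrase and an optional determiner
            obj = text[len(phrase):].strip().removeprefix("the ").strip()
            return (act, obj)
    raise ValueError(f"unparseable instruction: {instruction!r}")
```

e.g. `parse("pick up the red box")` yields `("PICK_UP", "red box")` — a static, one-shot output of exactly the kind the tweet contrasts with programs that unfold over time.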
Apr 25, 2020
this paper on memory in intelligent systems from @aidanematzadeh, @seb_ruder, @DaniYogatama at #BAICS2020 was a fascinating read! baicsworkshop.github.io/pdf/BAICS_22.p…
"On the other hand, the ability to forget is a crucial part of the human memory system." this is true! forgetting inessential details and compressing past information is important to help form abstractions for intelligent systems.
"The separation of computation and storage is necessary to incorporate structural bias into AI systems". not many of our favourite neural networks have modular/multiple memory components. the authors suggest that this kind of framework might help avoid catastrophic forgetting!
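the "forgetting as compression" idea can be sketched as a toy data structure (my own illustration, not the authors' proposal): a bounded buffer that, instead of silently dropping old items, folds them into a cheap running summary — forgetting the details while keeping an abstraction.

```python
from collections import deque

class CompressingMemory:
    """Toy memory: keep at most `capacity` recent items verbatim; when
    full, fold the oldest item into a running mean instead of storing it.
    The mean stands in for any lossy summary/abstraction of the past."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.recent = deque()   # detailed, recent storage
        self.summary = 0.0      # running mean of forgotten values
        self.forgotten = 0      # how many items have been compressed

    def store(self, value):
        self.recent.append(value)
        if len(self.recent) > self.capacity:
            old = self.recent.popleft()
            self.forgotten += 1
            # incremental running-mean update over forgotten items
            self.summary += (old - self.summary) / self.forgotten
```

after storing 1, 2, 3, 4 with capacity 2, the memory holds [3, 4] in detail and summarises the forgotten items 1 and 2 as their mean, 1.5 — separate "storage" (the deque) and "abstraction" (the summary), loosely in the spirit of the modular memory components the authors advocate.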
