Roma Patel
Apr 28, 2020
this paper from @shaohua0116 on guiding RL agents with program counterparts of natural language instructions was one of my favourites at #ICLR2020. here is why i think it's exciting and quite different from existing work. 1/6
there's a large literature of #NLProc work on semantic parsing (converting language -> executable meaning representations) for a variety of tasks. this is helpful, e.g., for database operations, for specifying goals/rewards for planners, for grounding to predefined robotic actions, etc. 2/6
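(as a toy illustration of this kind of static goal mapping---all names here, like parse_instruction and Goal, are made up for illustration and not from any specific parser---an instruction is parsed once into an executable goal that a planner/RL algorithm then optimises towards:)

```python
# toy sketch of language -> executable meaning representation (semantic parsing);
# everything here is hypothetical, not the interface of any real parser or planner.
from dataclasses import dataclass

@dataclass
class Goal:
    predicate: str   # e.g. "at"
    args: tuple      # e.g. ("agent", "red_door")

def parse_instruction(text: str) -> Goal:
    """tiny rule-based 'parser': one instruction pattern -> one goal predicate."""
    if text == "go to the red door":
        return Goal("at", ("agent", "red_door"))
    raise ValueError(f"unparsed instruction: {text!r}")

goal = parse_instruction("go to the red door")
# a reward function / planner target would be derived from this single goal state,
# and the program plays no further role once the episode starts.
print(goal)
```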
apart from select works, the programs are often treated as static---their executions are pre-defined, they're usually used once at some beginning/end point (e.g. to produce a goal state for some RL algorithm/planner), and they don't extend over time or with interaction. 3/6
this work uses programs that are more _functional_, involving control flow, conditionals, as well as nested subtasks. agents therefore perceive the environment and interact with it by following the program's control flow---which is far more natural than previous RL/language setups. 4/6
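(to give a flavour of the difference, here's a minimal sketch of a program-guided setup with made-up interfaces---env.query and agent.run_subtask are not the paper's actual DSL or API---where loops and conditionals are re-evaluated against the agent's perception as it acts:)

```python
# hypothetical program-guided agent sketch: control flow + perception queries + nested subtasks.
# env.query and agent.run_subtask are assumed interfaces, not the paper's actual API.
def collect_wood_and_build(env, agent):
    # the loop condition is perceptual, and is re-checked as the world changes
    while env.query("exists", "tree"):
        agent.run_subtask("goto", "tree")   # each subtask is handled by a learned low-level policy
        agent.run_subtask("chop", "tree")
    # conditional branch, also evaluated against the current state of the world
    if env.query("agent_has", "wood"):
        agent.run_subtask("goto", "workbench")
        agent.run_subtask("build", "bridge")
    # the program keeps structuring behaviour throughout the episode,
    # instead of only specifying a single goal state at the start.
```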
this should allow better handling of temporal conditions and lifelong learning guided by program synthesis. moreover, lots of dependencies/signals in natural language that couldn't be captured by static goal-based programs now have a better chance of being realised! 5/6
there's still a lot left to do on the language side here (e.g., this has rule-based program interpreters and doesn't do the language->program pipeline), but i'm pretty excited about this direction and future work that does! 6/6

More from @996roma

Oct 3, 2020
if you are applying to PhD programs in CS, this is for you! specifically, we've seen lots of opportunities tailored towards applicants from underrepresented groups, so several grad students at @browncs compiled a list that we hope is generally helpful cs.brown.edu/degrees/doctor… 1/6
this compilation of resources includes perspectives from PhD students (@kalpeshk2011, @nelsonfliu), advice from faculty members (@ybisk, @adveisner), and an overview of some existing initiatives aimed towards mentoring underrepresented applicants applying to PhD programs. 2/6
if you belong to an underrepresented group in AI/ML our student-run applicant support program @BrownCSDept would love to hear from you! we are here to offer feedback on apps and advice, to the best of our capabilities, with anything related to your PhD application. 3/6
Apr 27, 2020
here is work at #ICLR2020 by @FelixHill84 @AndrewLampinen @santoroAI + coauthors that nicely verifies (a subset of) the things put forth by several #NLProc position papers on language/grounding/meaning/form. some insights and reasons to read this paper: 1/5
this deals with richer 3D (rather than 2D) environments that allow systematic evaluation of phenomena against a richer variety of stimuli (here, only visual, but can extend to sensory information e.g., action effects, deformation, touch, sound + more realistic simulators). 2/5
unlike a lot of previous work, the tasks are not only navigation (whether over discrete / continuous spaces) but involve manipulation, positioning and gaze (through visual rays), which are far more complex motor activities. 3/5
Apr 25, 2020
this paper on memory in intelligent systems from @aidanematzadeh, @seb_ruder, @DaniYogatama at #BAICS2020 was a fascinating read! baicsworkshop.github.io/pdf/BAICS_22.p…
"On the other hand, the ability to forget is a crucial part of the human memory system." this is true! forgetting inessential details and compressing past information is important to help form abstractions for intelligent systems.
"The separation of computation and storage is necessary to incorporate structural bias into AI systems". not many of our favourite neural networks have modular/multiple memory components. the authors suggest that this kind of framework might help avoid catastrophic forgetting!
