here is work at #ICLR2020 by @FelixHill84 @AndrewLampinen @santoroAI + coauthors that nicely verifies (a subset of) the things put forth by several #NLProc position papers on language/grounding/meaning/form. some insights and reasons to read this paper: 1/5
this deals with 3D (rather than 2D) environments that allow systematic evaluation of phenomena against a richer variety of stimuli (here, only visual, but extensible to other sensory information, e.g. action effects, deformation, touch, sound, plus more realistic simulators). 2/5
unlike a lot of previous work, the tasks are not only navigation (whether over discrete or continuous spaces) but also involve manipulation, positioning, and gaze (through visual rays), which are far more complex motor activities. 3/5
useful insights are uncovered about agent perspective (egocentric vs. allocentric) and which of these allows more intelligent behaviour! another useful (albeit less surprising) insight is that the degree of systematic generalisation increases with the number of objects/words experienced during training. 4/5
overall, this work from @DeepMind folks nicely makes the case that richer, multimodal environments are _required_ for intelligent agents to generalise. excited to see future work that extends to realistic environments + multiple kinds of sensory information alongside language. 5/5
if you are applying to PhD programs in CS, this is for you! specifically, we've seen lots of opportunities tailored towards applicants from underrepresented groups, so several grad students at @browncs compiled a list that we hope is generally helpful cs.brown.edu/degrees/doctor… 1/6
this compilation of resources includes perspectives from PhD students (@kalpeshk2011, @nelsonfliu), advice from faculty members (@ybisk, @adveisner), and an overview of some existing initiatives aimed at mentoring underrepresented applicants to PhD programs. 2/6
if you belong to an underrepresented group in AI/ML, our student-run applicant support program at @BrownCSDept would love to hear from you! we are here to offer feedback on your application and, to the best of our abilities, advice on anything related to your PhD application. 3/6
this paper from @shaohua0116 on guiding RL agents with program counterparts of natural language instructions was one of my favourites at #ICLR2020. here is why i think it's exciting and quite different from existing work. 1/6
there's a large literature of #NLProc work on semantic parsing (converting language -> executable meaning representations) for a variety of tasks. this is helpful e.g. for database operations, for specifying goals/rewards for planners, for grounding to predefined robotic actions, etc. 2/6
apart from select works, the programs are often treated as static: their executions are pre-defined, they are usually used once at some beginning/end point (e.g. to produce a goal state for an RL algorithm/planner), and they do not extend over time or with interactions. 3/6
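(aside, not from the paper: a toy python sketch of what a "program counterpart" of an instruction could look like; ToyEnv and the primitive names are made up for illustration. the point is that the program's execution interleaves with environment interaction rather than producing a one-shot goal.)

```python
# toy sketch (not the paper's DSL or agent): a program counterpart of
# "move all red blocks onto the mat" that guides behaviour *over time*,
# interleaving execution with environment interaction.

class ToyEnv:
    def __init__(self):
        self.red_blocks_off_mat = 3            # pretend initial state
    def observe(self):
        return {"red_off_mat": self.red_blocks_off_mat}
    def step(self, primitive):
        # stand-in for low-level control; a learned RL policy would go here
        if primitive == "place_on_mat":
            self.red_blocks_off_mat -= 1
        print("agent executes:", primitive)

def run(program, env):
    """interpret the program; each statement triggers further interaction"""
    for stmt in program:
        if stmt[0] == "while":
            _, cond, body = stmt
            while cond(env.observe()):         # condition re-checked as the world changes
                run(body, env)
        elif stmt[0] == "act":
            env.step(stmt[1])

instruction_as_program = [
    ("while", lambda obs: obs["red_off_mat"] > 0, [
        ("act", "pick_up_red_block"),
        ("act", "place_on_mat"),
    ]),
]

run(instruction_as_program, ToyEnv())
```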
"On the other hand, the ability to forget is a crucial part of the human memory system." this is true! forgetting inessential details and compressing past information is important to help form abstractions for intelligent systems.
"The separation of computation and storage is necessary to incorporate structural bias into AI systems". not many of our favourite neural networks have modular/multiple memory components. the authors suggest that this kind of framework might help avoid catastrophic forgetting!