We aim to enable robots and RL agents to remember information to solve long-horizon, partially observable tasks, but is simple memory retention sufficient?
In our AAMAS paper, "Memory Retention Is Not Enough to Master Memory Tasks in Reinforcement Learning," we explore this question. 1/5

Recently, many impressive memory-RL studies have been published, such as Stable Hadamard Memory (SHM), Fast and Forgetful Memory (FFM), and Gated Transformer-XL (GTrXL), all of which excel at memorizing information. 2/5