At #ICML today: why is generalization so hard in value-based RL? We show that TD targets evolve in a structured way over the course of training, and that this encourages neural networks to ‘memorize’ the value function.
📺 icml.cc/virtual/2022/p…
📜 proceedings.mlr.press/v162/lyle22a.h…
TL;DR: the reward function in most benchmark MDPs doesn’t look much like the corresponding value function; in particular, the smooth* components of the value function tend to be missing from the reward!
*smooth ~= doesn't change much between adjacent states, e.g. a constant function.
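To make the smoothness point concrete, here’s a minimal sketch on a hypothetical 20-state chain MDP (an illustrative toy example, not taken from the paper): the reward is a single spike, but the exact value function V = (I - gamma * P)^{-1} r spreads that spike out smoothly across states.

```python
# Hypothetical chain MDP: sparse reward vs. smooth exact value function.
import numpy as np

n_states, gamma = 20, 0.95

# Deterministic "move right" policy; the final state is absorbing.
P = np.zeros((n_states, n_states))
for s in range(n_states - 1):
    P[s, s + 1] = 1.0
P[-1, -1] = 1.0

r = np.zeros(n_states)
r[-1] = 1.0                        # reward is nonzero only in the final state

V = np.linalg.solve(np.eye(n_states) - gamma * P, r)

def spikiness(f):
    """Fraction of total variation contributed by the largest single-step jump."""
    jumps = np.abs(np.diff(f))
    return jumps.max() / jumps.sum()

print("spikiness(reward) =", spikiness(r))            # 1.0: all change in one step
print("spikiness(value)  =", round(spikiness(V), 3))  # ~0.08: change spread out smoothly
```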
Early TD targets tend to resemble the reward, and it can take many updates for reward information to propagate (see attached figure). Meanwhile, the deep RL agent is training its neural network to fit these non-smooth prediction targets, building in a bias towards *memorization*.
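Here’s a sketch of that target-evolution point on the same toy chain (again an illustrative assumption, not the paper’s setup): iterating Bellman backups V_{k+1} = r + gamma * P V_k from V_0 = 0 shows how slowly reward information travels backwards, so early targets look like the sparse reward.

```python
# Toy chain MDP: each Bellman backup pushes reward information one step further back.
import numpy as np

n_states, gamma = 20, 0.95
P = np.zeros((n_states, n_states))
for s in range(n_states - 1):
    P[s, s + 1] = 1.0
P[-1, -1] = 1.0
r = np.zeros(n_states)
r[-1] = 1.0

V_true = np.linalg.solve(np.eye(n_states) - gamma * P, r)

V = np.zeros(n_states)
for k in range(1, 31):
    V = r + gamma * P @ V                  # TD / Bellman backup target
    if k in (1, 5, 30):
        covered = np.count_nonzero(V)      # states whose target "knows" about the reward yet
        print(f"backup {k:2d}: {covered:2d}/{n_states} states have nonzero targets, "
              f"max |V_k - V*| = {np.abs(V - V_true).max():.2f}")
```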
Later on in training, even if the smooth components of the value function do appear in the targets, the network maintains this bias: an update to its prediction for one state exerts little influence on its predictions for other states sampled from the replay buffer.
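A hedged sketch of that influence measurement (the network size, random stand-in "states", and single regression step are all illustrative assumptions, not the paper’s protocol): take one gradient step on a single state’s error and check how much the predictions at other states move.

```python
# Measure how far a single-state update propagates to other states' predictions.
import torch

torch.manual_seed(0)
net = torch.nn.Sequential(
    torch.nn.Linear(8, 64), torch.nn.ReLU(),
    torch.nn.Linear(64, 64), torch.nn.ReLU(),
    torch.nn.Linear(64, 1),
)
opt = torch.optim.SGD(net.parameters(), lr=1e-2)

states = torch.randn(256, 8)          # stand-in for states sampled from a replay buffer
target_state = states[:1]             # the single state we update on
with torch.no_grad():
    before = net(states).squeeze(-1)

# One TD-style regression step on one state towards an arbitrary target value.
loss = (net(target_state).squeeze(-1) - 1.0).pow(2).mean()
opt.zero_grad(); loss.backward(); opt.step()

with torch.no_grad():
    after = net(states).squeeze(-1)

delta = (after - before).abs()
print("change at updated state:", float(delta[0]))
print("mean change elsewhere  :", float(delta[1:].mean()))
```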
Randomly initializing a new network and distilling the trained agent’s value function into it mitigates this bias, suggesting that the bias is a result of training on TD targets. Networks trained only with policy-gradient losses also extrapolate more between states.
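A minimal sketch of that distillation check, under the same illustrative assumptions (here the "teacher" is just a stand-in network rather than a trained agent): regress a freshly initialized student onto fixed value predictions with a plain MSE loss; the single-step influence measurement from the previous sketch could then be repeated on the student.

```python
# Distil fixed value predictions (no bootstrapping) into a freshly initialized network.
import torch

torch.manual_seed(1)

def make_net():
    return torch.nn.Sequential(
        torch.nn.Linear(8, 64), torch.nn.ReLU(),
        torch.nn.Linear(64, 64), torch.nn.ReLU(),
        torch.nn.Linear(64, 1),
    )

teacher = make_net()        # stand-in for the trained agent's value network
student = make_net()        # freshly initialized network to distil into
opt = torch.optim.Adam(student.parameters(), lr=1e-3)

states = torch.randn(4096, 8)                # stand-in replay-buffer states
with torch.no_grad():
    targets = teacher(states)                # fixed value targets, not TD targets

for step in range(2000):
    idx = torch.randint(0, states.shape[0], (64,))
    loss = (student(states[idx]) - targets[idx]).pow(2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

print("final distillation MSE:", float(loss))
```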
Overall, it’s not clear whether this bias towards memorization is necessarily bad, as it might help stabilize learning; however, it does clearly reduce the extent to which an agent can generalize its learned policy to states it hasn’t seen yet.
Want to learn more? Visit our poster today at Hall E #1018. Thanks to co-authors Mark Rowland (presenting the poster), along with @wwdabney, @yaringal, and Marta Kwiatkowska.