Transient Non-Stationarity and Generalisation in Deep Reinforcement Learning
Format: Article
Language: English
Online access: Order full text
Abstract: Non-stationarity can arise in Reinforcement Learning (RL) even in stationary environments. For example, most RL algorithms collect new data throughout training, using a non-stationary behaviour policy. Because this non-stationarity is transient, it is often not explicitly addressed in deep RL, and a single neural network is continually updated. However, we find evidence that neural networks exhibit a memory effect whereby these transient non-stationarities can permanently impact the latent representation and adversely affect generalisation performance. Consequently, to improve the generalisation of deep RL agents, we propose Iterated Relearning (ITER). ITER augments standard RL training by repeatedly transferring knowledge of the current policy into a freshly initialised network, which thereby experiences less non-stationarity during training. Experimentally, we show that ITER improves performance on the challenging generalisation benchmarks ProcGen and Multiroom.
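The abstract only sketches ITER's core loop at a high level. Below is a minimal illustration of that idea, assuming a PyTorch setup in which the knowledge transfer is done by policy distillation; `PolicyNet`, `distill`, `iter_training`, and the training schedule are hypothetical stand-ins, not the authors' implementation.

```python
# Sketch of the ITER idea from the abstract: periodically distill the current
# policy into a freshly initialised network, which then continues RL training
# having seen less of the earlier transient non-stationarity.
# All names and hyperparameters here are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class PolicyNet(nn.Module):
    """Small actor network producing action logits (hypothetical architecture)."""
    def __init__(self, obs_dim: int, n_actions: int, hidden: int = 64):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU(),
                                  nn.Linear(hidden, hidden), nn.ReLU())
        self.head = nn.Linear(hidden, n_actions)

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.head(self.body(obs))


def distill(teacher: PolicyNet, student: PolicyNet, obs_batch: torch.Tensor,
            steps: int = 1000, lr: float = 1e-3) -> PolicyNet:
    """Transfer the teacher's policy into a freshly initialised student by
    minimising the KL divergence between their action distributions."""
    opt = torch.optim.Adam(student.parameters(), lr=lr)
    teacher.eval()
    for _ in range(steps):
        with torch.no_grad():
            target = F.log_softmax(teacher(obs_batch), dim=-1)
        pred = F.log_softmax(student(obs_batch), dim=-1)
        loss = F.kl_div(pred, target, log_target=True, reduction="batchmean")
        opt.zero_grad()
        loss.backward()
        opt.step()
    return student


def iter_training(make_net, rl_update, collect_obs, phases: int = 5):
    """Illustrative outer loop: alternate ordinary RL training with periodic
    relearning into a fresh network (the ITER step)."""
    policy = make_net()
    for _ in range(phases):
        rl_update(policy)                     # standard RL training phase
        fresh = make_net()                    # freshly initialised network
        obs = collect_obs(policy)             # states visited by current policy
        policy = distill(policy, fresh, obs)  # knowledge transfer into fresh net
    return policy
```

The point the abstract emphasises is that the fresh network is trained only to match the current policy, so it does not inherit the memory effect left by earlier, non-stationary phases of training.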
DOI: 10.48550/arxiv.2006.05826