Model-Free Generative Replay for Lifelong Reinforcement Learning: Application to Starcraft-2
Saved in:
Main authors: | , , , , , , |
Format: | Article |
Language: | eng |
Subjects: | |
Online access: | Order full text |
Abstract: | One approach to meeting the challenges of deep lifelong reinforcement learning (LRL) is careful management of the agent's learning experiences, in order to learn (without forgetting) and to build internal meta-models (of the tasks, environments, agents, and world). Generative replay (GR) is a biologically inspired replay mechanism that augments learning experiences with self-labelled examples drawn from an internal generative model that is updated over time. We present a version of GR for LRL that satisfies two desiderata: (a) introspective density modelling of the latent representations of policies learned using deep RL, and (b) model-free end-to-end learning. In this paper, we study three deep learning architectures for model-free GR, starting from a naïve GR and adding ingredients to achieve (a) and (b). We evaluate our proposed algorithms on three different scenarios comprising tasks from the Starcraft-2 and Minigrid domains. We report several key findings showing the impact of the design choices on quantitative metrics that include transfer learning, generalization to unseen tasks, fast adaptation after task change, performance with respect to a task expert, and catastrophic forgetting. We observe that our GR prevents drift in the features-to-action mapping from the latent vector space of a deep RL agent. We also show improvements in established lifelong learning metrics. We find that a small random replay buffer significantly increases the stability of training. Overall, we find that "hidden replay" (a well-known architecture for class-incremental classification) is the most promising approach, pushing the state of the art in GR for LRL, and we observe that the architecture of the sleep model may be more important for improving performance than the types of replay used. Our experiments required only 6% of training samples to achieve 80-90% of expert performance in most Starcraft-2 scenarios. |
DOI: | 10.48550/arxiv.2208.05056 |
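The abstract describes generative replay as augmenting fresh experience with self-labelled examples drawn from an internal generative model, combined with a small random replay buffer for stability. The sketch below is only a minimal illustration of that general idea under assumed module names, sizes, and losses (a toy latent VAE, a linear policy head, PyTorch); it is not the paper's architecture nor its "hidden replay" implementation.

```python
# Hypothetical sketch of generative replay over an agent's latent features.
# All names, dimensions, and the VAE generator are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

LATENT_DIM, ACTION_DIM, CODE_DIM = 64, 8, 16

class LatentVAE(nn.Module):
    """Generative model over the agent's latent feature vectors."""
    def __init__(self):
        super().__init__()
        self.enc = nn.Linear(LATENT_DIM, 2 * CODE_DIM)  # outputs mean and log-variance
        self.dec = nn.Linear(CODE_DIM, LATENT_DIM)

    def forward(self, h):
        mu, logvar = self.enc(h).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterisation
        return self.dec(z), mu, logvar

    def sample(self, n):
        return self.dec(torch.randn(n, CODE_DIM))  # "dreamed" latent features

policy_head = nn.Linear(LATENT_DIM, ACTION_DIM)  # maps latent features to action logits
vae = LatentVAE()
opt = torch.optim.Adam(list(policy_head.parameters()) + list(vae.parameters()), lr=1e-3)

def sleep_step(wake_latents, wake_actions, old_policy_head, replay_buffer, replay_frac=0.5):
    """One consolidation step: mix fresh wake data with self-labelled generated replay
    and a small buffer of stored real samples to limit drift in the features-to-action map."""
    n_replay = int(replay_frac * wake_latents.size(0))
    gen_latents = vae.sample(n_replay).detach()
    with torch.no_grad():
        gen_actions = old_policy_head(gen_latents).argmax(dim=-1)  # self-labelling

    latents = torch.cat([wake_latents, gen_latents, replay_buffer["h"]])
    actions = torch.cat([wake_actions, gen_actions, replay_buffer["a"]])

    # Distill the action mapping; refresh the generative model on real latents only.
    policy_loss = F.cross_entropy(policy_head(latents), actions)
    recon, mu, logvar = vae(wake_latents)
    vae_loss = F.mse_loss(recon, wake_latents) - 0.5 * (1 + logvar - mu**2 - logvar.exp()).mean()

    opt.zero_grad()
    (policy_loss + vae_loss).backward()
    opt.step()

# Example call with random stand-in data (shapes only):
old_head = nn.Linear(LATENT_DIM, ACTION_DIM)  # frozen copy of the head from before the task change
buffer = {"h": torch.randn(8, LATENT_DIM), "a": torch.randint(0, ACTION_DIM, (8,))}
sleep_step(torch.randn(32, LATENT_DIM), torch.randint(0, ACTION_DIM, (32,)), old_head, buffer)
```

In this sketch, the self-labelling step stands in for distilling the previous policy's features-to-action mapping during a sleep phase, while the small buffer of stored real latents plays the stabilising role the abstract attributes to a small random replay buffer.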