Replay Buffer with Local Forgetting for Adapting to Local Environment Changes in Deep Model-Based Reinforcement Learning
Main authors:
Format: Article
Language: English
Online access: Order full text
Abstract: One of the key behavioral characteristics used in neuroscience to determine
whether the subject of study -- be it a rodent or a human -- exhibits
model-based learning is effective adaptation to local changes in the
environment, a particular form of adaptivity that is the focus of this work. In
reinforcement learning, however, recent work has shown that modern deep
model-based reinforcement-learning (MBRL) methods adapt poorly to local
environment changes. An explanation for this mismatch is that MBRL methods are
typically designed with sample-efficiency on a single task in mind and the
requirements for effective adaptation are substantially higher, both in terms
of the learned world model and the planning routine. One particularly
challenging requirement is that the learned world model has to be sufficiently
accurate throughout relevant parts of the state-space. This is challenging for
deep-learning-based world models due to catastrophic forgetting. And while a
replay buffer can mitigate the effects of catastrophic forgetting, the
traditional first-in-first-out replay buffer precludes effective adaptation
because it retains stale data. In this work, we show that a conceptually simple
variation of this traditional replay buffer is able to overcome this
limitation. By removing from the buffer only those samples that lie in the local
neighbourhood of newly observed samples, deep world models can be built
that maintain their accuracy across the state-space, while also being able to
effectively adapt to local changes in the reward function. We demonstrate this
by applying our replay-buffer variation to a deep version of the classical Dyna
method, as well as to recent methods such as PlaNet and DreamerV2,
showing that deep model-based methods can likewise adapt effectively to
local changes in the environment.
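
The local-forgetting mechanism described in the abstract, evicting only the buffer entries that lie in the neighbourhood of newly observed samples rather than the oldest entries, can be illustrated with a minimal sketch. The details below are assumptions for illustration, not taken from the paper: the class name, the Euclidean distance used to define the neighbourhood, and the `radius` threshold are all hypothetical.

```python
import numpy as np


class LocalForgettingReplayBuffer:
    """Minimal sketch of a replay buffer with local forgetting.

    When a new transition arrives, stored transitions whose states lie
    within `radius` of the new state are evicted before the transition
    is appended, so data near newly re-visited states is refreshed while
    coverage of the rest of the state-space is preserved.
    """

    def __init__(self, capacity, radius):
        self.capacity = capacity
        self.radius = radius  # assumed neighbourhood size (Euclidean)
        self.buffer = []  # list of (state, action, reward, next_state, done)

    def add(self, state, action, reward, next_state, done):
        state = np.asarray(state, dtype=np.float32)
        next_state = np.asarray(next_state, dtype=np.float32)
        # Local forgetting: drop stored transitions whose state is within
        # `radius` of the newly observed state.
        self.buffer = [
            t for t in self.buffer
            if np.linalg.norm(t[0] - state) > self.radius
        ]
        # Fall back to evicting the oldest entry if the buffer is still full.
        if len(self.buffer) >= self.capacity:
            self.buffer.pop(0)
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size, rng=None):
        # Uniform sampling with replacement, as in a standard replay buffer.
        rng = rng or np.random.default_rng()
        idx = rng.integers(len(self.buffer), size=batch_size)
        return [self.buffer[i] for i in idx]
```

In contrast to a first-in-first-out buffer, a buffer of this kind discards stale data exactly where the environment has just been re-observed (for example, where the reward has changed) while keeping older samples elsewhere, which is what allows the learned world model to stay accurate across the state-space.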
DOI: 10.48550/arxiv.2303.08690