Goal-Space Planning with Subgoal Models
Format: Article
Language: English
Abstract: This paper investigates a new approach to model-based reinforcement learning using background planning: mixing (approximate) dynamic programming updates and model-free updates, similar to the Dyna architecture. Background planning with learned models is often worse than model-free alternatives, such as Double DQN, even though the former uses significantly more memory and computation. The fundamental problem is that learned models can be inaccurate and often generate invalid states, especially when iterated many steps. In this paper, we avoid this limitation by constraining background planning to a set of (abstract) subgoals and learning only local, subgoal-conditioned models. This goal-space planning (GSP) approach is more computationally efficient, naturally incorporates temporal abstraction for faster long-horizon planning, and avoids learning the transition dynamics entirely. We show that our GSP algorithm can propagate value from an abstract space in a manner that helps a variety of base learners learn significantly faster in different domains.
DOI: 10.48550/arxiv.2206.02902
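The abstract describes the core idea: instead of rolling out a full learned transition model (which compounds errors), plan over a small set of abstract subgoals using local, subgoal-conditioned models. A minimal sketch of that idea follows, assuming hypothetical learned tables `reward_to` and `discount_to`; these names and shapes are illustrative assumptions, not the paper's actual API.

```python
import numpy as np

def plan_in_goal_space(reward_to, discount_to, n_iters=100):
    """Approximate value iteration over a small set of abstract subgoals.

    reward_to[i, j]   -- hypothetical learned expected reward accumulated
                         while travelling from subgoal i to subgoal j
    discount_to[i, j] -- hypothetical learned discount over that journey
                         (near zero when j is unreachable from i)
    Both are (n_subgoals, n_subgoals) arrays produced by local,
    subgoal-conditioned models; no full transition dynamics are needed.
    """
    v = np.zeros(reward_to.shape[0])
    for _ in range(n_iters):
        # Bellman-style backup in goal space: pick the best next subgoal.
        v = np.max(reward_to + discount_to * v[None, :], axis=1)
    return v

# Toy example: three subgoals in a chain, reward only for reaching the last.
reward_to = np.array([[0.0, 0.0, 0.0],
                      [0.0, 0.0, 1.0],
                      [0.0, 0.0, 0.0]])
discount_to = np.array([[0.0, 0.9, 0.0],
                        [0.0, 0.0, 0.9],
                        [0.0, 0.0, 0.0]])
print(plan_in_goal_space(reward_to, discount_to))  # roughly [0.9, 1.0, 0.0]
```

Per the abstract, the resulting subgoal values are then propagated back to a model-free base learner to speed up its learning; the exact propagation mechanism is detailed in the paper itself.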