Revisiting Gaussian mixture critics in off-policy reinforcement learning: a sample-based approach
Main authors: | , , , , , , , , |
---|---|
Format: | Article |
Language: | English |
Subjects: | |
Online access: | Order full text |
Summary: | Actor-critic algorithms that make use of distributional policy evaluation
have frequently been shown to outperform their non-distributional counterparts
on many challenging control tasks. Examples of this behavior include the D4PG
and DMPO algorithms as compared to DDPG and MPO, respectively [Barth-Maron et
al., 2018; Hoffman et al., 2020]. However, both agents rely on the C51 critic
for value estimation. One major drawback of the C51 approach is its requirement
of prior knowledge about the minimum and maximum values a policy can attain, as
well as the number of bins used, which fixes the resolution of the
distributional estimate. While the DeepMind control suite of tasks utilizes
standardized rewards and episode lengths, thus enabling the entire suite to be
solved with a single setting of these hyperparameters, this is often not the
case. This paper revisits a natural alternative that removes this requirement,
namely a mixture of Gaussians, and a simple sample-based loss function to train
it in an off-policy regime. We empirically evaluate its performance on a broad
range of continuous control tasks and demonstrate that it eliminates the need
for these distributional hyperparameters and achieves state-of-the-art
performance on a variety of challenging tasks (e.g. the humanoid, dog,
quadruped, and manipulator domains). Finally, we provide an implementation in
the Acme agent repository. |
---|---|
DOI: | 10.48550/arxiv.2204.10256 |
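
The abstract describes training a mixture-of-Gaussians critic with a simple sample-based loss in an off-policy regime, with the reference implementation available in the Acme agent repository. As a rough illustration of what such a loss can look like, the sketch below scores bootstrapped target-return samples under the critic's predicted Gaussian mixture and minimizes their negative log-likelihood. It is a minimal NumPy sketch under assumed names and shapes; `gaussian_mixture_log_prob`, `sample_based_critic_loss`, and the way targets are formed are illustrative choices, not taken from the paper or from Acme.

```python
import numpy as np

# Illustrative sketch only: a sample-based (negative log-likelihood) loss for a
# Gaussian-mixture critic. Names and shapes are assumptions, not the Acme code.


def logsumexp(x, axis=-1, keepdims=False):
    """Numerically stable log-sum-exp."""
    m = np.max(x, axis=axis, keepdims=True)
    out = m + np.log(np.sum(np.exp(x - m), axis=axis, keepdims=True))
    return out if keepdims else np.squeeze(out, axis=axis)


def gaussian_mixture_log_prob(z, logits, means, scales):
    """Log-density of return samples `z` under a per-state mixture of Gaussians.

    z:      (batch,)   sampled target returns
    logits: (batch, k) unnormalized mixture weights
    means:  (batch, k) component means
    scales: (batch, k) component standard deviations (> 0)
    """
    z = z[:, None]  # broadcast each sample against the k components
    log_weights = logits - logsumexp(logits, axis=-1, keepdims=True)
    comp_log_prob = (
        -0.5 * ((z - means) / scales) ** 2
        - np.log(scales)
        - 0.5 * np.log(2.0 * np.pi)
    )
    return logsumexp(log_weights + comp_log_prob, axis=-1)  # (batch,)


def sample_based_critic_loss(target_returns, logits, means, scales):
    """Average negative log-likelihood of bootstrapped target-return samples,
    e.g. z = r + gamma * z', where z' is drawn from a target critic at
    (s', a') with a' ~ pi(.|s'). Minimizing this fits the critic's mixture to
    the empirical target-return distribution.

    target_returns: (batch, n_samples) sampled TD targets
    """
    per_sample = np.stack(
        [gaussian_mixture_log_prob(target_returns[:, i], logits, means, scales)
         for i in range(target_returns.shape[1])],
        axis=-1,
    )  # (batch, n_samples)
    return -np.mean(per_sample)
```

With, say, k = 5 components the critic head only has to output the mixture weights, means, and scales per state-action pair, and no minimum/maximum return or bin count needs to be fixed in advance, which is the hyperparameter burden of C51 that the abstract highlights.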