Stochastic Latent Actor-Critic: Deep Reinforcement Learning with a Latent Variable Model
| Field | Value |
|---|---|
| Main authors | |
| Format | Article |
| Language | English |
| Subjects | |
| Online access | Order full text |
Abstract: Deep reinforcement learning (RL) algorithms can use high-capacity deep networks to learn directly from image observations. However, these high-dimensional observation spaces present a number of challenges in practice, since the policy must now solve two problems: representation learning and task learning. In this work, we tackle these two problems separately, by explicitly learning latent representations that can accelerate reinforcement learning from images. We propose the stochastic latent actor-critic (SLAC) algorithm: a sample-efficient and high-performing RL algorithm for learning policies for complex continuous control tasks directly from high-dimensional image inputs. SLAC provides a novel and principled approach for unifying stochastic sequential models and RL into a single method, by learning a compact latent representation and then performing RL in the model's learned latent space. Our experimental evaluation demonstrates that our method outperforms both model-free and model-based alternatives in terms of final performance and sample efficiency, on a range of difficult image-based control tasks. Our code and videos of our results are available at our website.
DOI: 10.48550/arxiv.1907.00953
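The abstract's two-part recipe — fit a stochastic latent variable model to the image stream, then run an actor-critic in the learned latent space — can be sketched compactly. Below is a minimal illustrative sketch in PyTorch, not the authors' implementation: the module shapes, the single-step encoder, the unit-Gaussian prior (SLAC's actual prior is sequential, conditioned on the previous latent and action), and the simplified actor-critic losses are all assumptions for exposition.

```python
import torch
import torch.nn as nn

class LatentModel(nn.Module):
    """Toy stand-in for SLAC's stochastic latent model: encode an
    observation into a latent z, reconstruct it, train with an ELBO."""
    def __init__(self, obs_dim=64, act_dim=6, z_dim=32):
        super().__init__()
        # q(z | x): outputs mean and log-std of a diagonal Gaussian
        self.encoder = nn.Sequential(nn.Linear(obs_dim, 128), nn.ReLU(),
                                     nn.Linear(128, 2 * z_dim))
        # p(x | z): reconstructs the observation from the latent
        self.decoder = nn.Sequential(nn.Linear(z_dim, 128), nn.ReLU(),
                                     nn.Linear(128, obs_dim))

    def posterior(self, obs):
        mean, log_std = self.encoder(obs).chunk(2, dim=-1)
        return torch.distributions.Normal(mean, log_std.exp())

model = LatentModel()
actor = nn.Sequential(nn.Linear(32, 64), nn.ReLU(),
                      nn.Linear(64, 6), nn.Tanh())      # pi(a | z)
critic = nn.Sequential(nn.Linear(32 + 6, 64), nn.ReLU(),
                       nn.Linear(64, 1))                # Q(z, a)

obs = torch.randn(16, 64)   # stand-in for a batch of image observations

# 1) Representation learning: maximize the ELBO (reconstruction + KL).
#    Simplified here to a unit-Gaussian prior; the paper's sequential
#    prior conditions on the previous latent and action.
post = model.posterior(obs)
z = post.rsample()                                   # reparameterized sample
recon = ((model.decoder(z) - obs) ** 2).sum(-1).mean()
prior = torch.distributions.Normal(torch.zeros_like(z), torch.ones_like(z))
kl = torch.distributions.kl_divergence(post, prior).sum(-1).mean()
model_loss = recon + kl

# 2) Task learning: the actor-critic consumes the learned latent z
#    rather than raw pixels; z is detached so the RL losses do not
#    backpropagate into the model.
z_rl = z.detach()
a = actor(z_rl)
actor_loss = -critic(torch.cat([z_rl, a], dim=-1)).mean()
print(model_loss.item(), actor_loss.item())
```

The `detach()` reflects the separation the abstract emphasizes: representation learning is driven entirely by the model objective, while the RL losses only consume the resulting latent, so the two problems are tackled by distinct training signals.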