Learning latent state representation for speeding up exploration
Saved in:
Main authors: | , , , |
---|---|
Format: | Article |
Language: | eng |
Subjects: | |
Online access: | Order full text |
Summary: | 2nd Exploration in Reinforcement Learning Workshop at the 36th International Conference on Machine Learning, 2019. Exploration is an extremely challenging problem in reinforcement learning, especially in high-dimensional state and action spaces and when only sparse rewards are available. Effective representations can indicate which components of the state are task-relevant and thus reduce the dimensionality of the space to explore. In this work, we take a representation learning viewpoint on exploration, utilizing prior experience to learn effective latent representations, which can subsequently indicate which regions to explore. Prior experience on separate but related tasks helps learn representations of the state that are effective at predicting instantaneous rewards. These learned representations can then be used with an entropy-based exploration method to explore high-dimensional spaces efficiently by lowering the dimensionality of the search space. We show the benefits of this representation for meta-exploration in a simulated object-pushing environment. |
---|---|
DOI: | 10.48550/arxiv.1905.12621 |
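
The abstract describes a two-stage recipe: first learn a low-dimensional latent state representation from prior, related tasks by training it to predict instantaneous rewards, then drive exploration on a new task with an entropy-based bonus computed in that latent space. Below is a minimal PyTorch sketch of that recipe; the names (`RewardEncoder`, `representation_loss`, `entropy_bonus`), the k-nearest-neighbor entropy estimate, and all layer sizes are illustrative assumptions, not the paper's published implementation.

```python
# Hypothetical sketch, not the paper's code: the architecture, the
# reward-prediction objective, and the k-NN entropy bonus are assumptions.
import torch
import torch.nn as nn

class RewardEncoder(nn.Module):
    """Maps raw states to a low-dimensional latent kept reward-relevant."""
    def __init__(self, state_dim: int, latent_dim: int = 4):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(state_dim, 128), nn.ReLU(),
            nn.Linear(128, latent_dim),
        )
        # Predicting reward from the latent forces it to retain the
        # task-relevant components of the state.
        self.reward_head = nn.Linear(latent_dim, 1)

    def forward(self, state: torch.Tensor):
        z = self.encoder(state)
        return z, self.reward_head(z)

def representation_loss(model: RewardEncoder,
                        states: torch.Tensor,
                        rewards: torch.Tensor) -> torch.Tensor:
    """Supervised reward prediction on experience from prior, related tasks."""
    _, pred = model(states)
    return nn.functional.mse_loss(pred.squeeze(-1), rewards)

def entropy_bonus(z_new: torch.Tensor,
                  z_visited: torch.Tensor,
                  k: int = 5) -> torch.Tensor:
    """Particle-style novelty bonus in latent space: the distance to the
    k-th nearest previously visited latent. A large distance means low
    local density, i.e. an under-explored region worth a shaping reward."""
    dists = torch.cdist(z_new, z_visited)                # (n_new, n_visited)
    kth = dists.topk(k, dim=1, largest=False).values[:, -1]
    return torch.log1p(kth)
```

At exploration time, such a bonus would be added to the (sparse) environment reward, so the policy is pushed toward latent regions it has not yet visited, while the frozen encoder confines the search to the few reward-relevant dimensions of the state.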