Planning to Explore via Self-Supervised World Models
Format: Article
Language: English
Abstract: Reinforcement learning makes it possible to solve complex tasks;
however, the learning tends to be task-specific, and sample efficiency remains
a challenge. We present Plan2Explore, a self-supervised reinforcement learning
agent that tackles both of these challenges through a new approach to
self-supervised exploration and fast adaptation to new tasks, which need not
be known during exploration. During exploration, unlike prior methods that
retrospectively compute the novelty of observations after the agent has
already reached them, our agent acts efficiently by using planning to seek out
expected future novelty. After exploration, the agent quickly adapts to
multiple downstream tasks in a zero-shot or few-shot manner. We evaluate on
challenging control tasks from high-dimensional image inputs. Without any
training supervision or task-specific interaction, Plan2Explore outperforms
prior self-supervised exploration methods and, in fact, almost matches the
performance of an oracle agent that has access to rewards. Videos and code at
https://ramanans1.github.io/plan2explore/
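
The core mechanism described in the abstract, planning inside a learned world
model to seek out expected future novelty rather than scoring observations
retrospectively, can be illustrated with a small sketch. In the paper, expected
novelty is estimated as the disagreement (predictive variance) of an ensemble
of one-step latent dynamics models. The sketch below is a minimal illustration
of that idea under stated assumptions, not the authors' implementation: it
substitutes random linear models for the learned ensemble and a simple
random-shooting planner for the paper's imagination-based policy optimization,
and all names and dimensions here are hypothetical.

```python
import numpy as np

LATENT_DIM, ACTION_DIM, ENSEMBLE_SIZE = 8, 2, 5
rng = np.random.default_rng(0)

# Stand-in ensemble of one-step models h_{t+1} = W @ [h_t, a_t].
# In the paper these are neural networks trained on the agent's experience;
# random linear maps are used here purely for illustration.
ensemble = [rng.normal(size=(LATENT_DIM, LATENT_DIM + ACTION_DIM))
            for _ in range(ENSEMBLE_SIZE)]

def intrinsic_reward(h, a):
    """Expected novelty of (h, a): disagreement of the ensemble's
    next-state predictions, measured as mean predictive variance."""
    x = np.concatenate([h, a])
    preds = np.stack([W @ x for W in ensemble])  # (ENSEMBLE_SIZE, LATENT_DIM)
    return preds.var(axis=0).mean()

def plan_action(h, horizon=5, candidates=64):
    """Random-shooting planner (a stand-in for the paper's planner):
    sample action sequences, roll them out in imagination under the
    ensemble-mean dynamics, and return the first action of the sequence
    with the highest total expected future novelty."""
    best_reward, best_action = -np.inf, None
    for _ in range(candidates):
        actions = rng.uniform(-1, 1, size=(horizon, ACTION_DIM))
        h_t, total = h.copy(), 0.0
        for a in actions:
            total += intrinsic_reward(h_t, a)
            # Advance the imagined state with the ensemble-mean prediction.
            x = np.concatenate([h_t, a])
            h_t = np.mean([W @ x for W in ensemble], axis=0)
        if total > best_reward:
            best_reward, best_action = total, actions[0]
    return best_action

h0 = rng.normal(size=LATENT_DIM)
print("exploratory action:", plan_action(h0))
```

The design point this sketch captures is that the novelty reward is computed
on imagined future states before the agent visits them, which is what lets the
planner steer toward novelty in advance instead of recognizing it in
hindsight.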
DOI: 10.48550/arxiv.2005.05960