Learning to Reach Goals via Iterated Supervised Learning
Main authors: , , , , , ,
Format: Article
Language: English
Subjects:
Online access: Order full text
Summary: Current reinforcement learning (RL) algorithms can be brittle and difficult to use, especially when learning goal-reaching behaviors from sparse rewards. Although supervised imitation learning provides a simple and stable alternative, it requires access to demonstrations from a human supervisor. In this paper, we study RL algorithms that use imitation learning to acquire goal-reaching policies from scratch, without the need for expert demonstrations or a value function. In lieu of demonstrations, we leverage the property that any trajectory is a successful demonstration for reaching the final state in that same trajectory. We propose a simple algorithm in which an agent continually relabels and imitates the trajectories it generates to progressively learn goal-reaching behaviors from scratch. In each iteration, the agent collects new trajectories using the latest policy, and maximizes the likelihood of the actions along these trajectories under the goal that was actually reached, so as to improve the policy. We formally show that this iterated supervised learning procedure optimizes a bound on the RL objective, derive performance bounds of the learned policy, and empirically demonstrate improved goal-reaching performance and robustness over current RL algorithms in several benchmark tasks.
DOI: 10.48550/arxiv.1912.06088
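
For readers who want a concrete picture of the procedure described in the summary, below is a minimal sketch of the iterated relabel-and-imitate loop on a toy grid world. The grid environment, the tabular softmax policy, and all hyperparameters (grid size, horizon, learning rate, iteration count) are illustrative assumptions made for this sketch, not the paper's actual setup.

```python
# Minimal sketch (assumptions, not the paper's implementation): a tabular
# goal-conditioned policy on an N x N grid learns by repeatedly collecting
# trajectories, relabeling each one with the goal it actually reached, and
# imitating its own actions via a maximum-likelihood (cross-entropy) update.
import numpy as np

N = 5                                           # grid side length (assumed)
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]    # up, down, left, right
rng = np.random.default_rng(0)

# logits[state, goal, action] parameterizes a softmax policy pi(a | s, g).
logits = np.zeros((N * N, N * N, len(ACTIONS)))

def idx(cell):
    return cell[0] * N + cell[1]

def step(cell, a):
    dx, dy = ACTIONS[a]
    return (min(max(cell[0] + dx, 0), N - 1),
            min(max(cell[1] + dy, 0), N - 1))

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

def sample_action(s, g):
    return rng.choice(len(ACTIONS), p=softmax(logits[s, g]))

def collect_trajectory(goal, horizon=12):
    cell, traj = (0, 0), []                     # fixed start state (assumed)
    for _ in range(horizon):
        a = sample_action(idx(cell), idx(goal))
        traj.append((idx(cell), a))
        cell = step(cell, a)
    return traj, cell                           # final cell = goal actually reached

for _ in range(200):
    # 1) Collect a trajectory with the latest policy toward a random commanded goal.
    commanded = (rng.integers(N), rng.integers(N))
    traj, reached = collect_trajectory(commanded)

    # 2) Relabel: treat the trajectory as a demonstration for the goal it reached,
    # 3) and imitate it: one gradient step on log pi(a | s, reached) per transition.
    g = idx(reached)
    for s, a in traj:
        grad = -softmax(logits[s, g])
        grad[a] += 1.0                          # gradient of log-softmax at taken action
        logits[s, g] += 0.5 * grad              # learning rate 0.5 (assumed)
```

In a full implementation the tabular policy would be replaced by a neural network trained on minibatches of relabeled transitions from a replay buffer; the tabular update above is only meant to keep the sketch self-contained and runnable.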