Efficient Imitation Without Demonstrations via Value-Penalized Auxiliary Control from Examples
Main authors:
Format: Article
Language: English
Subjects:
Online access: Order full text
Abstract: Learning from examples of success is an appealing approach to reinforcement
learning, but it presents a challenging exploration problem, especially for
complex or long-horizon tasks. This work introduces value-penalized auxiliary
control from examples (VPACE), an algorithm that significantly improves
exploration in example-based control by adding examples of simple auxiliary
tasks. For instance, a manipulation task may have auxiliary examples of an
object being reached for, grasped, or lifted. We show that the naïve
application of scheduled auxiliary control to example-based learning can lead
to value overestimation and poor performance. We resolve the problem with an
above-success-level value penalty. Across both simulated and real robotic
environments, we show that our approach substantially improves learning
efficiency for challenging tasks, while maintaining bounded value estimates. We
compare with existing approaches to example-based learning, inverse
reinforcement learning, and an exploration bonus. Preliminary results also
suggest that VPACE may learn more efficiently than the more common approaches
of using full trajectories or true sparse rewards. Videos, code, and datasets:
https://papers.starslab.ca/vpace
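The "above-success-level value penalty" mentioned in the abstract can be read as an extra term in the critic loss that discourages value estimates from rising above the value level implied by the provided success examples. The following is a minimal NumPy sketch of that idea; the function name, the squared form of the penalty, and the `coef` weight are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def above_success_value_penalty(q_batch, q_success_examples, coef=1.0):
    """Sketch of a value penalty keyed to example success states.

    q_batch: Q-value estimates for states/actions in the current training batch.
    q_success_examples: Q-value estimates at example success states.
    coef: penalty weight (hypothetical hyperparameter).
    """
    # Estimate the "success level" as the mean value of the success examples.
    v_success = np.mean(q_success_examples)
    # Only penalize the portion of each Q-value that exceeds that level.
    excess = np.maximum(q_batch - v_success, 0.0)
    # Return a scalar penalty that could be added to the critic loss.
    return coef * np.mean(excess ** 2)
```

In this reading, the penalty keeps value estimates bounded when auxiliary-task examples are mixed in, which is what the abstract attributes the improved stability to.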
DOI: 10.48550/arxiv.2407.03311