Data augmentation for efficient learning from parametric experts
Saved in:

| Main authors: | |
|---|---|
| Format: | Article |
| Language: | English |
| Subjects: | |
| Online access: | Order full text |
Abstract: We present a simple, yet powerful data-augmentation technique to enable data-efficient learning from parametric experts for reinforcement and imitation learning. We focus on what we call the policy cloning setting, in which we use online or offline queries of an expert or expert policy to inform the behavior of a student policy. This setting arises naturally in a number of problems, for instance as variants of behavior cloning, or as a component of other algorithms such as DAGGER, policy distillation or KL-regularized RL. Our approach, augmented policy cloning (APC), uses synthetic states to induce feedback-sensitivity in a region around sampled trajectories, thus dramatically reducing the environment interactions required for successful cloning of the expert. We achieve highly data-efficient transfer of behavior from an expert to a student policy for high-degrees-of-freedom control problems. We demonstrate the benefit of our method in the context of several existing and widely used algorithms that include policy cloning as a constituent part. Moreover, we highlight the benefits of our approach in two practically relevant settings: (a) expert compression, i.e. transfer to a student with fewer parameters; and (b) transfer from privileged experts, i.e. where the expert has a different observation space than the student, usually including access to privileged information.
DOI: 10.48550/arxiv.2205.11448
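
As a rough illustration of the idea described in the abstract, the sketch below shows one way synthetic states can be used to label extra cloning targets using expert queries alone, so the student sees how the expert's action varies around the sampled trajectory without additional environment steps. The function name, the Gaussian perturbation model, and the hyperparameters (`num_synthetic`, `noise_scale`) are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np


def augmented_cloning_batch(states, expert_policy, num_synthetic=8, noise_scale=0.01):
    """Build (state, expert action) pairs for policy cloning.

    Each visited state is paired with a few synthetic states drawn from a small
    Gaussian neighbourhood around it; the expert is queried at every state, so
    the student is also supervised on how the expert's action changes near the
    sampled trajectory, without any additional environment interaction.
    """
    batch_states, batch_targets = [], []
    for s in states:
        s = np.asarray(s, dtype=np.float64)
        batch_states.append(s)
        batch_targets.append(expert_policy(s))
        for _ in range(num_synthetic):
            s_syn = s + noise_scale * np.random.randn(*s.shape)  # synthetic nearby state
            batch_states.append(s_syn)
            batch_targets.append(expert_policy(s_syn))           # expert query only, no env step
    return np.stack(batch_states), np.stack(batch_targets)


# Toy usage with a stand-in linear "expert"; the student would then be fit to
# these targets with a standard cloning loss (e.g. mean squared error, or a KL
# term for stochastic policies).
if __name__ == "__main__":
    trajectory_states = [np.random.randn(4) for _ in range(3)]  # states from sampled trajectories
    expert = lambda s: -0.5 * s                                 # hypothetical expert policy
    xs, ys = augmented_cloning_batch(trajectory_states, expert)
    print(xs.shape, ys.shape)  # (27, 4) (27, 4)
```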