Plan Your Target and Learn Your Skills: Transferable State-Only Imitation Learning via Decoupled Policy Optimization
Format: Article
Language: English
Abstract: Recent progress in state-only imitation learning extends the applicability of imitation learning to real-world settings by relieving the need to observe expert actions. However, existing solutions only learn to extract a state-to-action mapping policy from the data, without considering how the expert plans towards the target. This hinders the ability to leverage demonstrations and limits the flexibility of the policy. In this paper, we introduce Decoupled Policy Optimization (DePO), which explicitly decouples the policy into a high-level state planner and an inverse dynamics model. With embedded decoupled policy gradient and generative adversarial training, DePO enables knowledge transfer to different action spaces or state transition dynamics, and can generalize the planner to out-of-demonstration state regions. Our in-depth experimental analysis shows the effectiveness of DePO in learning a generalized target state planner while achieving the best imitation performance. We demonstrate the appeal of DePO for transfer across different tasks by pre-training, and its potential for co-training agents with various skills.
DOI: 10.48550/arxiv.2203.02214
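The abstract describes the policy as decoupled into a high-level state planner and a low-level inverse dynamics model. The following is a minimal PyTorch sketch of that structure only, under assumed details: deterministic MLP networks and illustrative class names (StatePlanner, InverseDynamics, DecoupledPolicy) that are not taken from the paper's code. The paper's actual parameterization and training procedure (decoupled policy gradient, generative adversarial training on state transitions) are not reproduced here.

```python
# Hypothetical sketch of a decoupled policy: pi(a|s) = inverse_dynamics(s, planner(s)).
# Assumptions: continuous state/action vectors, deterministic MLPs; the paper's
# method uses its own (possibly stochastic) parameterization and losses.
import torch
import torch.nn as nn


class StatePlanner(nn.Module):
    """High-level planner: predicts a target next state s' from the current state s."""

    def __init__(self, state_dim: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, state_dim),
        )

    def forward(self, s: torch.Tensor) -> torch.Tensor:
        return self.net(s)


class InverseDynamics(nn.Module):
    """Low-level model: infers the action that moves the agent from s to s'."""

    def __init__(self, state_dim: int, action_dim: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, action_dim),
        )

    def forward(self, s: torch.Tensor, s_next: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([s, s_next], dim=-1))


class DecoupledPolicy(nn.Module):
    """Composite policy: plan a target state, then map (s, target) to an action.

    Transfer intuition from the abstract: the planner only reasons over states,
    so it can be reused across agents, while the inverse dynamics model is
    relearned for a new action space or transition dynamics.
    """

    def __init__(self, state_dim: int, action_dim: int):
        super().__init__()
        self.planner = StatePlanner(state_dim)
        self.inverse_dynamics = InverseDynamics(state_dim, action_dim)

    def forward(self, s: torch.Tensor) -> torch.Tensor:
        s_target = self.planner(s)
        return self.inverse_dynamics(s, s_target)


if __name__ == "__main__":
    # Toy usage: batch of 4 states with 11-dim observations, 3-dim actions.
    policy = DecoupledPolicy(state_dim=11, action_dim=3)
    actions = policy(torch.randn(4, 11))
    print(actions.shape)  # torch.Size([4, 3])
```

In this sketch, swapping the agent's embodiment would mean keeping `StatePlanner` fixed and retraining only `InverseDynamics`, which mirrors the knowledge-transfer claim in the abstract.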