Deconfounding Imitation Learning with Variational Inference
Main Authors: | , , , , , |
---|---|
Format: | Article |
Language: | English |
Subjects: | |
Online Access: | Order full text |
Abstract: | Standard imitation learning can fail when the expert demonstrators have different sensory inputs than the imitating agent, because partial observability gives rise to hidden confounders in the causal graph. In previous work, the confounding problem has been worked around by training policies with query access to the expert's policy, or via inverse reinforcement learning (IRL). Both approaches have drawbacks, however: the expert's policy may not be available, and IRL can be unstable in practice. Instead, we propose to train a variational inference model that infers the expert's latent information, and to use it to train a latent-conditional policy. We prove that, under strong assumptions, this method makes it theoretically possible to identify the correct imitation learning policy from expert demonstrations alone. In practice, we focus on a setting with weaker assumptions, in which we use exploration data to learn the inference model. We show in theory and practice that this algorithm converges to the correct interventional policy, solves the confounding issue, and can, under certain assumptions, achieve asymptotically optimal imitation performance. |
DOI: | 10.48550/arxiv.2211.02667 |
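The abstract's two components can be illustrated with a short code sketch: an inference model q(z | trajectory) that infers the expert's latent information, and a latent-conditional policy pi(a | s, z) trained by behavioral cloning under a variational (ELBO-style) objective. The following is a minimal, hypothetical PyTorch sketch, not the authors' method: all names, dimensions, the GRU encoder, the Gaussian prior, and the KL weight are illustrative assumptions; consult the paper at the DOI above for the actual algorithm.

```python
# Hypothetical sketch of the variational deconfounding idea from the abstract.
# Assumed components (not from the paper): a GRU trajectory encoder, a diagonal
# Gaussian q(z | tau) with a standard-normal prior, and a discrete-action MLP policy.
import torch
import torch.nn as nn

STATE_DIM, ACTION_DIM, LATENT_DIM, HIDDEN = 8, 4, 2, 64

class InferenceModel(nn.Module):
    """Encodes a (state, action) trajectory into q(z | tau)."""
    def __init__(self):
        super().__init__()
        self.rnn = nn.GRU(STATE_DIM + ACTION_DIM, HIDDEN, batch_first=True)
        self.mu = nn.Linear(HIDDEN, LATENT_DIM)
        self.log_std = nn.Linear(HIDDEN, LATENT_DIM)

    def forward(self, traj):                      # traj: (B, T, S+A)
        _, h = self.rnn(traj)                     # h: (1, B, H)
        h = h.squeeze(0)
        return self.mu(h), self.log_std(h)

class LatentPolicy(nn.Module):
    """pi(a | s, z): action logits conditioned on state and inferred latent."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM + LATENT_DIM, HIDDEN), nn.ReLU(),
            nn.Linear(HIDDEN, ACTION_DIM))

    def forward(self, state, z):
        return self.net(torch.cat([state, z], dim=-1))

def elbo_loss(policy, encoder, states, actions):
    """Behavioral cloning under a sampled latent, plus a KL term
    regularizing q(z | tau) toward the standard-normal prior."""
    onehot = nn.functional.one_hot(actions, ACTION_DIM).float()
    mu, log_std = encoder(torch.cat([states, onehot], dim=-1))
    z = mu + log_std.exp() * torch.randn_like(mu)           # reparameterized sample
    z_rep = z.unsqueeze(1).expand(-1, states.shape[1], -1)  # broadcast z over time
    logits = policy(states, z_rep)
    nll = nn.functional.cross_entropy(
        logits.reshape(-1, ACTION_DIM), actions.reshape(-1))
    kl = -0.5 * (1 + 2 * log_std - mu.pow(2) - (2 * log_std).exp()).sum(-1).mean()
    return nll + 1e-3 * kl                                  # illustrative KL weight

# Toy training step on random stand-in "demonstrations".
policy, encoder = LatentPolicy(), InferenceModel()
opt = torch.optim.Adam([*policy.parameters(), *encoder.parameters()], lr=1e-3)
states = torch.randn(16, 10, STATE_DIM)              # (batch, time, state)
actions = torch.randint(0, ACTION_DIM, (16, 10))     # discrete expert actions
loss = elbo_loss(policy, encoder, states, actions)
opt.zero_grad()
loss.backward()
opt.step()
```

Note that this sketch fits the inference model on the demonstrations themselves, which corresponds to the strong-assumption, demonstrations-only setting the abstract mentions; the paper's practical algorithm instead learns the inference model from the imitator's own exploration data.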