Deep PQR: Solving Inverse Reinforcement Learning using Anchor Actions
In Proceedings of the 37th ICML, Vienna, Austria, PMLR 119, pp. 3431-3441, 2020
Saved in:
Main authors:
Format: Article
Language: eng
Keywords:
Online access: Order full text
Summary: We propose a reward function estimation framework for inverse reinforcement learning with deep energy-based policies. We name our method PQR, as it sequentially estimates the Policy, the $Q$-function, and the Reward function by deep learning. PQR does not assume that the reward depends solely on the state; instead, it allows for a dependency on the choice of action. Moreover, PQR allows for stochastic state transitions. To accomplish this, we assume the existence of one anchor action whose reward is known, typically the action of doing nothing, which yields no reward. We present both estimators and algorithms for the PQR method. When the environment transition is known, we prove that the PQR reward estimator uniquely recovers the true reward. With unknown transitions, we bound the estimation error of PQR. Finally, the performance of PQR is demonstrated on synthetic and real-world datasets.
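To see why an anchor action identifies the reward, the following is a minimal sketch of the argument, assuming the standard maximum-entropy (soft-$Q$) setting with discount $\gamma$; the temperature $\alpha$, anchor action $a^{A}$, and soft value function $V$ are notation chosen here for exposition and need not match the paper's.

$$
\pi(a \mid s) \propto \exp\!\big(Q(s,a)/\alpha\big)
\quad\Longrightarrow\quad
Q(s,a) = \alpha \log \frac{\pi(a \mid s)}{\pi(a^{A} \mid s)} + Q(s,a^{A}).
$$

The soft Bellman equation $Q(s,a) = r(s,a) + \gamma\,\mathbb{E}\left[V(s') \mid s,a\right]$, together with the anchor assumption $r(s,a^{A}) = 0$, gives $Q(s,a^{A}) = \gamma\,\mathbb{E}\left[V(s') \mid s,a^{A}\right]$, fixing the additive constant in $Q$ that the observed policy alone cannot determine; the reward is then recovered as $r(s,a) = Q(s,a) - \gamma\,\mathbb{E}\left[V(s') \mid s,a\right]$, mirroring the policy, $Q$-function, reward order of the PQR pipeline.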
DOI: 10.48550/arxiv.2007.07443