Teaching Inverse Reinforcement Learners via Features and Demonstrations
Saved in:

| Field | Value |
|---|---|
| Main authors | , , |
| Format | Article |
| Language | eng |
| Keywords | |
| Online access | Order full text |
Abstract: Learning near-optimal behaviour from an expert's demonstrations typically relies on the assumption that the learner knows the features that the true reward function depends on. In this paper, we study the problem of learning from demonstrations in the setting where this is not the case, i.e., where there is a mismatch between the worldviews of the learner and the expert. We introduce a natural quantity, the teaching risk, which measures the potential suboptimality of policies that look optimal to the learner in this setting. We show that bounds on the teaching risk guarantee that the learner is able to find a near-optimal policy using standard algorithms based on inverse reinforcement learning. Based on these findings, we suggest a teaching scheme in which the expert can decrease the teaching risk by updating the learner's worldview, and thus ultimately enable her to find a near-optimal policy.
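The abstract names the teaching risk but does not reproduce its formal definition. The following is a minimal Python sketch of one natural formalization, assuming the learner's worldview is a linear feature map `A` over the true feature space and the true reward is linear with weight vector `w_star`: the teaching risk is measured here as the norm of the component of `w_star` lying in `ker(A)`, i.e., the part of the true reward invisible to the learner. The function names and the greedy update rule in `teach_feature` are illustrative assumptions, not necessarily the paper's exact scheme.

```python
import numpy as np

def feature_basis(A, tol=1e-10):
    """Orthonormal basis (rows) of the subspace spanned by the learner's
    feature directions, computed via SVD."""
    _, s, Vt = np.linalg.svd(A, full_matrices=False)
    return Vt[s > tol]

def teaching_risk(A, w_star):
    """Norm of the component of the (normalized) true reward vector w*
    that lies in ker(A), i.e., that the learner cannot perceive."""
    B = feature_basis(A)
    w = w_star / np.linalg.norm(w_star)
    residual = w - B.T @ (B @ w)      # projection of w* onto ker(A)
    return np.linalg.norm(residual)

def teach_feature(A, w_star):
    """One worldview update: hand the learner the feature direction that
    removes the kernel component of w* (an illustrative greedy choice)."""
    B = feature_basis(A)
    w = w_star / np.linalg.norm(w_star)
    residual = w - B.T @ (B @ w)
    if np.linalg.norm(residual) < 1e-12:
        return A                      # teaching risk is already zero
    return np.vstack([A, residual / np.linalg.norm(residual)])

# Toy example in a 3-dimensional true feature space: the learner's
# worldview covers only the first two coordinates.
A = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])
w_star = np.array([0.6, 0.0, 0.8])
print(teaching_risk(A, w_star))                          # 0.8: high risk
print(teaching_risk(teach_feature(A, w_star), w_star))   # ~0.0 after update
```

In this toy run, policies that look optimal to the learner can ignore the third coordinate entirely, so the risk is large (0.8); a single worldview update that spans the hidden direction drives it to zero.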
DOI: 10.48550/arxiv.1810.08926