Understanding Learned Reward Functions
Format: | Article |
---|---|
Language: | English |
Abstract: | In many real-world tasks, it is not possible to procedurally specify an RL
agent's reward function. In such cases, a reward function must instead be
learned from interacting with and observing humans. However, current techniques
for reward learning may fail to produce reward functions which accurately
reflect user preferences. Absent significant advances in reward learning, it is
thus important to be able to audit learned reward functions to verify whether
they truly capture user preferences. In this paper, we investigate techniques
for interpreting learned reward functions. In particular, we apply saliency
methods to identify failure modes and predict the robustness of reward
functions. We find that learned reward functions often implement surprising
algorithms that rely on contingent aspects of the environment. We also discover
that existing interpretability techniques often attend to irrelevant changes in
reward output, suggesting that reward interpretability may need significantly
different methods from policy interpretability. |
---|---|
DOI: | 10.48550/arxiv.2012.05862 |
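The abstract mentions applying saliency methods to learned reward functions. As a rough illustration only, and not the paper's actual code, the sketch below shows gradient-based saliency on a toy reward model; the `RewardNet` architecture, the observation shape, and the plain gradient-magnitude saliency measure are all assumptions made for this example.

```python
# Minimal sketch: gradient saliency for a learned reward model.
# The RewardNet architecture and toy observation shape are illustrative
# assumptions, not the setup used in the paper.
import torch
import torch.nn as nn


class RewardNet(nn.Module):
    """Toy reward model: maps a flat observation to a scalar reward."""

    def __init__(self, obs_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 64),
            nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs).squeeze(-1)


def gradient_saliency(reward_net: nn.Module, obs: torch.Tensor) -> torch.Tensor:
    """Return |d reward / d obs| for each input feature.

    Large values flag features the learned reward is most sensitive to,
    which can help reveal rewards that latch onto contingent aspects of
    the environment rather than the user's actual preferences.
    """
    obs = obs.clone().detach().requires_grad_(True)
    reward = reward_net(obs).sum()  # sum over the batch: one backward pass
    reward.backward()
    return obs.grad.abs()


if __name__ == "__main__":
    torch.manual_seed(0)
    net = RewardNet(obs_dim=8)
    batch = torch.randn(4, 8)          # 4 example observations
    saliency = gradient_saliency(net, batch)
    print(saliency.shape)              # torch.Size([4, 8])
    print(saliency.argmax(dim=1))      # most influential feature per observation
```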