Explain Your Move: Understanding Agent Actions Using Specific and Relevant Feature Attribution
Main authors:
Format: Article
Language: eng
Subjects:
Online access: Order full text
Abstract: As deep reinforcement learning (RL) is applied to more tasks, there is a need
to visualize and understand the behavior of learned agents. Saliency maps
explain agent behavior by highlighting the features of the input state that are
most relevant for the agent in taking an action. Existing perturbation-based
approaches to compute saliency often highlight regions of the input that are
not relevant to the action taken by the agent. Our proposed approach, SARFA
(Specific and Relevant Feature Attribution), generates more focused saliency
maps by balancing two aspects (specificity and relevance) that capture
different desiderata of saliency. The first captures the impact of perturbation
on the relative expected reward of the action to be explained. The second
downweighs irrelevant features that alter the relative expected rewards of
actions other than the action to be explained. We compare SARFA with existing
approaches on agents trained to play board games (Chess and Go) and Atari games
(Breakout, Pong and Space Invaders). We show through illustrative examples
(Chess, Atari, Go), human studies (Chess), and automated evaluation methods
(Chess) that SARFA generates saliency maps that are more interpretable for
humans than existing approaches. For the code release and demo videos, see
https://nikaashpuri.github.io/sarfa-saliency/.
DOI: 10.48550/arxiv.1912.12191
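The abstract above describes a per-feature saliency score that balances two quantities: specificity (how much perturbing a feature reduces the agent's relative preference for the explained action) and relevance (down-weighting features whose perturbation mainly shifts the relative preferences over other actions). The Python sketch below illustrates one plausible way to compute such a score from an agent's Q-values. It is a minimal sketch, not the paper's exact formulation: the function names, the softmax conversion of Q-values to action preferences, the KL-divergence relevance term, and the harmonic-mean combination are assumptions made here for illustration.

```python
import numpy as np

def softmax(q):
    """Numerically stable softmax over a vector of Q-values."""
    z = q - np.max(q)
    e = np.exp(z)
    return e / e.sum()

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) for two discrete distributions."""
    p = np.clip(p, eps, 1.0)
    q = np.clip(q, eps, 1.0)
    return float(np.sum(p * np.log(p / q)))

def sarfa_style_saliency(q_original, q_perturbed, action):
    """Hedged sketch of a specificity/relevance saliency score for one
    perturbed feature (assumed formulation, not the paper's verbatim).

    q_original:  Q-values over all actions in the original state s
    q_perturbed: Q-values over all actions in the perturbed state s'
    action:      index of the action being explained
    """
    p = softmax(np.asarray(q_original, dtype=float))
    p_prime = softmax(np.asarray(q_perturbed, dtype=float))

    # Specificity: drop in the preference for the explained action
    # caused by the perturbation.
    delta_p = p[action] - p_prime[action]
    if delta_p <= 0:
        return 0.0  # perturbation did not reduce preference for the action

    # Relevance: compare the renormalised preferences over the *other*
    # actions before and after the perturbation; a large shift means the
    # feature matters for other actions and should be down-weighted.
    mask = np.ones_like(p, dtype=bool)
    mask[action] = False
    p_rem = p[mask] / p[mask].sum()
    p_prime_rem = p_prime[mask] / p_prime[mask].sum()
    k = kl_divergence(p_prime_rem, p_rem)
    k_inv = 1.0 / (1.0 + k)

    # Harmonic mean: the score is high only when both terms are high.
    return 2.0 * delta_p * k_inv / (delta_p + k_inv)
```

In a full pipeline, `q_perturbed` would come from re-evaluating the agent after perturbing a single input feature (for example, removing a piece in Chess or blurring a patch of an Atari frame), and the scores collected across all features would form the saliency map.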