Explain Your Move: Understanding Agent Actions Using Specific and Relevant Feature Attribution

As deep reinforcement learning (RL) is applied to more tasks, there is a need to visualize and understand the behavior of learned agents. Saliency maps explain agent behavior by highlighting the features of the input state that are most relevant for the agent in taking an action. Existing perturbation-based approaches to compute saliency often highlight regions of the input that are not relevant to the action taken by the agent. Our proposed approach, SARFA (Specific and Relevant Feature Attribution), generates more focused saliency maps by balancing two aspects (specificity and relevance) that capture different desiderata of saliency. The first captures the impact of perturbation on the relative expected reward of the action to be explained. The second downweighs irrelevant features that alter the relative expected rewards of actions other than the action to be explained. We compare SARFA with existing approaches on agents trained to play board games (Chess and Go) and Atari games (Breakout, Pong and Space Invaders). We show through illustrative examples (Chess, Atari, Go), human studies (Chess), and automated evaluation methods (Chess) that SARFA generates saliency maps that are more interpretable for humans than existing approaches. For the code release and demo videos, see https://nikaashpuri.github.io/sarfa-saliency/.
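The abstract describes SARFA as combining two quantities computed from a perturbation of the input state: specificity (the drop in the relative expected reward of the explained action) and relevance (a penalty when the perturbation mainly shifts the expected rewards of other actions). Below is a minimal illustrative sketch of such a perturbation-based score, assuming access to the agent's Q-values for the original and perturbed states and a softmax conversion to action probabilities; the function name sarfa_saliency and the harmonic-mean combination are assumptions for illustration and should be checked against the paper and the released code linked above.

import numpy as np

def softmax(q):
    # Convert Q-values into a probability distribution over actions.
    z = np.exp(q - np.max(q))
    return z / z.sum()

def sarfa_saliency(q_original, q_perturbed, action):
    # Hypothetical SARFA-style score for one perturbed feature.
    # q_original:  Q-values in the original state
    # q_perturbed: Q-values after perturbing the feature being scored
    # action:      index of the action whose choice is being explained
    p, p_pert = softmax(q_original), softmax(q_perturbed)

    # Specificity: drop in the relative expected reward (probability mass)
    # of the explained action caused by the perturbation.
    dp = p[action] - p_pert[action]

    # Relevance: compare the distributions over the remaining actions;
    # a large shift means the feature mostly affects other actions.
    rem = np.delete(p, action)
    rem_pert = np.delete(p_pert, action)
    rem, rem_pert = rem / rem.sum(), rem_pert / rem_pert.sum()
    kl = float(np.sum(rem_pert * np.log(rem_pert / rem)))
    k = 1.0 / (1.0 + kl)

    # Harmonic-mean combination: a feature scores highly only if the
    # perturbation is both specific and relevant to the explained action.
    return 0.0 if dp <= 0 else 2.0 * k * dp / (k + dp)

# Example: four actions, explaining why action 0 was chosen.
print(sarfa_saliency(np.array([2.0, 0.5, 0.1, 0.0]),
                     np.array([0.8, 0.6, 0.1, 0.0]),
                     action=0))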

Detailed description

Saved in:
Bibliographic details
Main authors: Puri, Nikaash, Verma, Sukriti, Gupta, Piyush, Kayastha, Dhruv, Deshmukh, Shripad, Krishnamurthy, Balaji, Singh, Sameer
Format: Article
Language: eng
Keywords:
Online access: Order full text
creator Puri, Nikaash
Verma, Sukriti
Gupta, Piyush
Kayastha, Dhruv
Deshmukh, Shripad
Krishnamurthy, Balaji
Singh, Sameer
description As deep reinforcement learning (RL) is applied to more tasks, there is a need to visualize and understand the behavior of learned agents. Saliency maps explain agent behavior by highlighting the features of the input state that are most relevant for the agent in taking an action. Existing perturbation-based approaches to compute saliency often highlight regions of the input that are not relevant to the action taken by the agent. Our proposed approach, SARFA (Specific and Relevant Feature Attribution), generates more focused saliency maps by balancing two aspects (specificity and relevance) that capture different desiderata of saliency. The first captures the impact of perturbation on the relative expected reward of the action to be explained. The second downweighs irrelevant features that alter the relative expected rewards of actions other than the action to be explained. We compare SARFA with existing approaches on agents trained to play board games (Chess and Go) and Atari games (Breakout, Pong and Space Invaders). We show through illustrative examples (Chess, Atari, Go), human studies (Chess), and automated evaluation methods (Chess) that SARFA generates saliency maps that are more interpretable for humans than existing approaches. For the code release and demo videos, see https://nikaashpuri.github.io/sarfa-saliency/.
doi_str_mv 10.48550/arxiv.1912.12191
format Article
fulltext fulltext_linktorsrc
identifier DOI: 10.48550/arxiv.1912.12191
language eng
recordid cdi_arxiv_primary_1912_12191
source arXiv.org
subjects Computer Science - Artificial Intelligence
Computer Science - Computer Vision and Pattern Recognition
title Explain Your Move: Understanding Agent Actions Using Specific and Relevant Feature Attribution
url https://arxiv.org/abs/1912.12191