Building Reliable Explanations of Unreliable Neural Networks: Locally Smoothing Perspective of Model Interpretation

We present a novel method for reliably explaining the predictions of neural networks. We consider an explanation reliable if it identifies the input features relevant to the model output, taking into account both the input and its neighboring data points. Our method is built on the assumption of a smooth landscape in the loss function of the model prediction: locally consistent loss and gradient profiles...

Detailed Description

Bibliographic Details
Main Authors: Lim, Dohun, Lee, Hyeonseok, Kim, Sungchan
Format: Article
Language: eng
Subjects:
Online Access: Order full text
creator Lim, Dohun ; Lee, Hyeonseok ; Kim, Sungchan
description We present a novel method for reliably explaining the predictions of neural networks. We consider an explanation reliable if it identifies the input features relevant to the model output, taking into account both the input and its neighboring data points. Our method is built on the assumption of a smooth landscape in the loss function of the model prediction: locally consistent loss and gradient profiles. A theoretical analysis established in this study suggests that such locally smooth model explanations can be learned from a batch of noisy copies of the input with L1 regularization of the saliency map. Extensive experiments support the analysis, showing that the proposed saliency maps recover the original classes of adversarial examples crafted against both naturally and adversarially trained models, significantly outperforming previous methods. We further demonstrate that this strong performance stems from the method's ability to identify input features that are truly relevant to the model output for the input and its neighboring data points, fulfilling the requirements of a reliable explanation.
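
The abstract outlines the core recipe: optimize a saliency map over a batch of noisy copies of the input, with L1 regularization keeping the map sparse. The following is a minimal PyTorch sketch of that idea, not the authors' published implementation; the objective (keep the target-class prediction of the masked, noise-perturbed input while penalizing the L1 norm of the map) and all names and hyperparameters (learn_saliency_map, num_copies, sigma, l1_weight) are assumptions made for illustration.

import torch
import torch.nn.functional as F

def learn_saliency_map(model, x, target_class, num_copies=32, sigma=0.1,
                       l1_weight=1e-3, lr=0.05, steps=200):
    """Optimize a saliency map over a batch of noisy copies of the input x.

    model: a differentiable classifier returning logits.
    x: input tensor of shape (1, C, H, W).
    target_class: int class index whose evidence the map should retain.
    """
    model.eval()
    # One saliency weight per input element, optimized directly.
    logits_map = torch.zeros_like(x, requires_grad=True)
    optimizer = torch.optim.Adam([logits_map], lr=lr)
    targets = torch.full((num_copies,), target_class, dtype=torch.long,
                         device=x.device)

    for _ in range(steps):
        optimizer.zero_grad()
        # A batch of noisy copies of the same input: the local neighborhood of x.
        noisy_x = x + sigma * torch.randn(num_copies, *x.shape[1:], device=x.device)
        saliency = torch.sigmoid(logits_map)   # map values in (0, 1)
        masked = noisy_x * saliency            # keep only the "salient" evidence
        # Classification loss on the whole noisy batch plus L1 sparsity on the map
        # (the sigmoid output is non-negative, so its sum equals its L1 norm).
        loss = F.cross_entropy(model(masked), targets) + l1_weight * saliency.sum()
        loss.backward()
        optimizer.step()

    return torch.sigmoid(logits_map).detach()

Under these assumptions, a call such as saliency = learn_saliency_map(model, image, predicted_class) would return a sparse map in (0, 1) highlighting features that remain relevant across the input and its noisy neighbors.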
doi_str_mv 10.48550/arxiv.2103.14332
format Article
creationdate 2021-03-26
rights http://arxiv.org/licenses/nonexclusive-distrib/1.0
fulltext fulltext_linktorsrc
identifier DOI: 10.48550/arxiv.2103.14332
language eng
recordid cdi_arxiv_primary_2103_14332
source arXiv.org
subjects Computer Science - Computer Vision and Pattern Recognition ; Computer Science - Learning
title Building Reliable Explanations of Unreliable Neural Networks: Locally Smoothing Perspective of Model Interpretation
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2024-12-30T18%3A01%3A34IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-arxiv_GOX&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Building%20Reliable%20Explanations%20of%20Unreliable%20Neural%20Networks:%20Locally%20Smoothing%20Perspective%20of%20Model%20Interpretation&rft.au=Lim,%20Dohun&rft.date=2021-03-26&rft_id=info:doi/10.48550/arxiv.2103.14332&rft_dat=%3Carxiv_GOX%3E2103_14332%3C/arxiv_GOX%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true