Extensive evaluation of image classifiers’ interpretations
Saliency maps are input-resolution matrices used for visualizing local interpretations of image classifiers. Their pixel values reflect the importance of corresponding image locations for the model’s decision. Despite numerous proposals on how to obtain such maps, their evaluation remains an open question. This paper presents a carefully designed experimental procedure along with a set of quantitative interpretation evaluation metrics that rely solely on the original model behavior. Previously noticed evaluation biases have been attenuated by separating locations with high and low values, considering the full saliency map resolution, and using classifiers with diverse accuracies and all the classes in the dataset. We used the proposed evaluation metrics to compare and analyze seven well-known interpretation methods. Our experiments confirm the importance of object background as well as negative saliency map pixels, and we show that the scale of their impact on the model is comparable to that of positive ones. We also demonstrate that a good class score interpretation does not necessarily imply a good probability interpretation. DeepLIFT and LRP-ε methods proved most successful altogether, while Grad-CAM and Ablation-CAM performed very poorly, even in the detection of positive relevance. The retention of positive values alone in the latter two methods was responsible for the inaccurate detection of irrelevant locations as well.
Published in: Neural computing & applications, 2024-11, Vol. 36 (33), pp. 20787-20805
Main authors: Poštić, Suraja; Subašić, Marko
Format: Article
Language: English
Online access: Full text
DOI: 10.1007/s00521-024-10273-4
Publisher: Springer London
ORCID: 0000-0002-0675-4810
ISSN: 0941-0643
EISSN: 1433-3058
Source: SpringerNature Complete Journals
Subjects: Ablation; Artificial Intelligence; Classification; Computational Biology/Bioinformatics; Computational Science and Engineering; Computer Science; Data Mining and Knowledge Discovery; Decision making; Experiments; Image Processing and Computer Vision; Impact analysis; Methods; Neural networks; Original Article; Pixels; Probability and Statistics in Computer Science; Salience; Variables
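The abstract describes evaluation metrics that rely only on the original model's behavior: a saliency map is judged by how the classifier's output changes when the locations it marks as relevant are perturbed. The sketch below is not the paper's procedure but a minimal illustration of that idea; it assumes a torchvision ResNet-18, a simple gradient-times-input saliency map, and a masking check that zeroes the most positively salient pixels and measures the drop in the target class score.

```python
import torch
import torchvision.models as models

# Any differentiable classifier works; ResNet-18 is used here purely for illustration.
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1).eval()

def gradient_x_input_saliency(image: torch.Tensor, target_class: int) -> torch.Tensor:
    """Return a signed, input-resolution saliency map (H, W) for one image (3, H, W)."""
    x = image.unsqueeze(0).clone().requires_grad_(True)
    score = model(x)[0, target_class]            # unnormalized class score (logit)
    score.backward()
    # Sum gradient*input over channels so each pixel gets one relevance value.
    return (x.grad[0] * image).sum(dim=0).detach()

def score_drop_when_masked(image: torch.Tensor, saliency: torch.Tensor,
                           target_class: int, fraction: float = 0.1) -> float:
    """Zero out the `fraction` most positively salient pixels and report how much
    the target class score drops; larger drops suggest the map points at
    locations the model actually relies on."""
    k = max(1, int(fraction * saliency.numel()))
    threshold = saliency.flatten().topk(k).values.min()
    keep = (saliency < threshold).float()        # 1 = keep pixel, 0 = remove it
    with torch.no_grad():
        original = model(image.unsqueeze(0))[0, target_class].item()
        masked = model((image * keep).unsqueeze(0))[0, target_class].item()
    return original - masked

# Usage on a random image; a real run would use properly normalized dataset images.
image = torch.rand(3, 224, 224)
target = 207                                     # arbitrary ImageNet class index
sal = gradient_x_input_saliency(image, target)
print("class-score drop after masking:", score_drop_when_masked(image, sal, target))
```

A fuller evaluation in the spirit of the paper would also treat positive and negative relevance separately and aggregate results over all classes of a dataset rather than a single image.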