Computing and evaluating saliency maps for image classification: a tutorial
Published in: | Journal of electronic imaging 2023-03, Vol.32 (2), p.020801-020801 |
---|---|
Main authors: | Gomez, Tristan; Mouchère, Harold |
Format: | Article |
Language: | eng |
Subjects: | Artificial Intelligence; Computer Science |
Online access: | Full text |
container_end_page | 020801 |
---|---|
container_issue | 2 |
container_start_page | 020801 |
container_title | Journal of electronic imaging |
container_volume | 32 |
creator | Gomez, Tristan; Mouchère, Harold |
description | Facing the black-box nature of deep learning models for image classification, a popular trend in the literature proposes methods to generate explanations in the form of heat maps indicating the areas that played an important role in the models’ decisions. Such explanations are called saliency maps and constitute an active field of research, given that many fundamental questions are yet to be answered: How can they be computed efficiently? How should they be evaluated? What exactly can they be used for? Given the increasing rate at which papers are produced and the vast amount of existing literature, we propose this study to help newcomers join this community and contribute to the research field. First, the two existing approaches to generating saliency maps are discussed, namely post-hoc methods and attention models. Post-hoc methods are generic algorithms that can be applied to any model from a given class without requiring fine-tuning. In contrast, attention models are ad hoc architectures that generate a saliency map during the inference phase to guide the decision. We show that both approaches can be divided into several subcategories and illustrate each of them with one important model or method. Second, we present the current methodologies used to evaluate saliency maps, including objective and subjective protocols, depending on whether or not they involve users. Among the objective methods, we detail faithfulness metrics in particular and propose an implementation featuring the faithfulness metrics discussed in this paper (https://github.com/TristanGomez44/metrics-saliency-maps). |
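To make the distinction concrete, the following is a minimal, hypothetical sketch of a post-hoc method: vanilla gradient saliency applied to a pretrained torchvision classifier. It is not code from the paper or its repository; the model choice and the input file name are illustrative assumptions.

```python
# Hypothetical minimal sketch of a post-hoc method: vanilla gradient saliency
# applied to a pretrained torchvision classifier. The model choice and the
# input file name ("cat.jpg") are assumptions for illustration only.
import torch
from PIL import Image
from torchvision import models, transforms

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

img = preprocess(Image.open("cat.jpg").convert("RGB")).unsqueeze(0)
img.requires_grad_(True)

logits = model(img)                   # forward pass, no fine-tuning required
target = logits.argmax(dim=1).item()  # explain the predicted class
logits[0, target].backward()          # gradient of the class score w.r.t. pixels

# Saliency map: maximum absolute gradient across color channels, shape (H, W)
saliency = img.grad.abs().max(dim=1).values.squeeze(0)
```

Because such a method only needs gradients of the output with respect to the input, it applies unchanged to any differentiable classifier, which is what distinguishes post-hoc methods from attention models that build the saliency map into the architecture itself.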
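The faithfulness metrics mentioned in the abstract are implemented by the authors at https://github.com/TristanGomez44/metrics-saliency-maps; the sketch below does not reproduce that repository's API but illustrates, under simplifying assumptions, one common faithfulness protocol: a deletion-style score in which pixels are masked in decreasing order of saliency and the drop in the class probability is recorded.

```python
# Hypothetical sketch of a deletion-style faithfulness metric: mask the most
# salient pixels first and track how quickly the class probability drops.
# Simplified illustration, not the API of the authors' repository.
import torch

def deletion_score(model, img, saliency, target, steps=20):
    """Mean class probability as the most salient pixels are progressively masked.

    img      : (1, C, H, W) input tensor
    saliency : (H, W) saliency map for the target class
    target   : int, index of the class being explained
    """
    h, w = saliency.shape
    order = saliency.flatten().argsort(descending=True)  # most salient first
    masked = img.detach().clone()
    flat = masked.view(1, masked.shape[1], -1)           # view sharing storage with `masked`
    pixels_per_step = max(1, (h * w) // steps)
    probs = []
    with torch.no_grad():
        for step in range(steps + 1):
            prob = torch.softmax(model(masked), dim=1)[0, target].item()
            probs.append(prob)
            idx = order[step * pixels_per_step:(step + 1) * pixels_per_step]
            flat[..., idx] = 0.0                          # erase the next batch of pixels
    # A lower average (area under the deletion curve) suggests the map better
    # identifies the pixels the model actually relies on.
    return sum(probs) / len(probs)
```

With the map from the previous sketch, a call such as `deletion_score(model, img, saliency, target)` returns a single scalar; the authors' implementation covers the full set of faithfulness metrics discussed in the paper, beyond this simplified deletion curve.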
doi_str_mv | 10.1117/1.JEI.32.2.020801 |
format | Article |
addtitle | J. Electron. Imaging |
backlink | https://hal.science/hal-04082936 (open-access record in HAL) |
eissn | 1560-229X |
orcid | Gomez, Tristan: 0000-0002-0182-4760; Mouchère, Harold: 0000-0001-6220-7216 |
publisher | Society of Photo-Optical Instrumentation Engineers (SPIE and IS&T) |
rights | 2023 Society of Photo-Optical Instrumentation Engineers (SPIE); distributed under a Creative Commons Attribution 4.0 International License |
fulltext | fulltext |
identifier | ISSN: 1017-9909 |
ispartof | Journal of electronic imaging, 2023-03, Vol.32 (2), p.020801-020801 |
issn | 1017-9909 (print); 1560-229X (electronic) |
language | eng |
recordid | cdi_hal_primary_oai_HAL_hal_04082936v1 |
source | SPIE Digital Library (Journals) |
subjects | Artificial Intelligence; Computer Science |
title | Computing and evaluating saliency maps for image classification: a tutorial |