CAMERAS: Enhanced Resolution And Sanity preserving Class Activation Mapping for image saliency


Full Description

Saved in:
Bibliographic Details
Main Authors: Jalwana, Mohammad A. A. K, Akhtar, Naveed, Bennamoun, Mohammed, Mian, Ajmal
Format: Article
Language: eng
Subjects:
Online Access: Order full text
creator Jalwana, Mohammad A. A. K; Akhtar, Naveed; Bennamoun, Mohammed; Mian, Ajmal
description Backpropagation image saliency aims at explaining model predictions by estimating model-centric importance of individual pixels in the input. However, class-insensitivity of the earlier layers in a network only allows saliency computation with low resolution activation maps of the deeper layers, resulting in compromised image saliency. Remedying this can lead to sanity failures. We propose CAMERAS, a technique to compute high-fidelity backpropagation saliency maps without requiring any external priors and preserving the map sanity. Our method systematically performs multi-scale accumulation and fusion of the activation maps and backpropagated gradients to compute precise saliency maps. From accurate image saliency to articulation of relative importance of input features for different models, and precise discrimination between model perception of visually similar objects, our high-resolution mapping offers multiple novel insights into the black-box deep visual models, which are presented in the paper. We also demonstrate the utility of our saliency maps in adversarial setup by drastically reducing the norm of attack signals by focusing them on the precise regions identified by our maps. Our method also inspires new evaluation metrics and a sanity check for this developing research direction. Code is available at https://github.com/VisMIL/CAMERAS
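The core idea described above — accumulating activation maps and backpropagated gradients across multiple scales, then fusing them into one saliency map — can be sketched in a few lines. The following is a minimal NumPy illustration assuming the per-scale activation and gradient maps have already been extracted from the network; it is not the authors' implementation (see the linked repository for the actual code), and the function and parameter names are hypothetical.

```python
import numpy as np


def upsample(x, out_hw):
    """Nearest-neighbor upsampling of a 2-D map to the target spatial size."""
    h, w = x.shape
    H, W = out_hw
    rows = np.arange(H) * h // H
    cols = np.arange(W) * w // W
    return x[np.ix_(rows, cols)]


def multiscale_saliency(acts, grads, out_hw=(8, 8)):
    """Accumulate per-scale activation maps and gradients at a common
    resolution, fuse them by element-wise product, and normalize to [0, 1]."""
    act_sum = np.zeros(out_hw)
    grad_sum = np.zeros(out_hw)
    for a, g in zip(acts, grads):
        act_sum += upsample(a, out_hw)   # accumulate activations across scales
        grad_sum += upsample(g, out_hw)  # accumulate gradients across scales
    sal = np.maximum(act_sum * grad_sum, 0.0)  # fuse, keep positive evidence
    rng = sal.max() - sal.min()
    return (sal - sal.min()) / rng if rng > 0 else sal
```

In practice the per-scale maps would come from running the model on resized copies of the input and hooking a chosen layer; bilinear rather than nearest-neighbor upsampling would typically be used for smoother maps.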
doi_str_mv 10.48550/arxiv.2106.10649
format Article
fulltext fulltext_linktorsrc
identifier DOI: 10.48550/arxiv.2106.10649
language eng
recordid cdi_arxiv_primary_2106_10649
source arXiv.org
subjects Computer Science - Artificial Intelligence
Computer Science - Computer Vision and Pattern Recognition
Computer Science - Learning
title CAMERAS: Enhanced Resolution And Sanity preserving Class Activation Mapping for image saliency