CRAFT: Concept Recursive Activation FacTorization for Explainability

Proceedings of the IEEE / CVF Computer Vision and Pattern Recognition Conference (CVPR), 2023

Detailed Description

Saved in:
Bibliographic Details
Main Authors: Fel, Thomas; Picard, Agustin; Bethune, Louis; Boissin, Thibaut; Vigouroux, David; Colin, Julien; Cadène, Rémi; Serre, Thomas
Format: Article
Language: eng
Subjects:
Online Access: Order full text
creator Fel, Thomas
Picard, Agustin
Bethune, Louis
Boissin, Thibaut
Vigouroux, David
Colin, Julien
Cadène, Rémi
Serre, Thomas
description Proceedings of the IEEE / CVF Computer Vision and Pattern Recognition Conference (CVPR), 2023 Attribution methods, which employ heatmaps to identify the most influential regions of an image that impact model decisions, have gained widespread popularity as a type of explainability method. However, recent research has exposed the limited practical value of these methods, attributed in part to their narrow focus on the most prominent regions of an image -- revealing "where" the model looks, but failing to elucidate "what" the model sees in those areas. In this work, we try to fill in this gap with CRAFT -- a novel approach to identify both "what" and "where" by generating concept-based explanations. We introduce 3 new ingredients to the automatic concept extraction literature: (i) a recursive strategy to detect and decompose concepts across layers, (ii) a novel method for a more faithful estimation of concept importance using Sobol indices, and (iii) the use of implicit differentiation to unlock Concept Attribution Maps. We conduct both human and computer vision experiments to demonstrate the benefits of the proposed approach. We show that the proposed concept importance estimation technique is more faithful to the model than previous methods. When evaluating the usefulness of the method for human experimenters on a human-centered utility benchmark, we find that our approach significantly improves on two of the three test scenarios. Our code is freely available at github.com/deel-ai/Craft.
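The "Activation FacTorization" in the title refers to decomposing a layer's activations into a small dictionary of non-negative concept directions. A minimal sketch of that idea is below; it uses plain multiplicative-update NMF, and the function name, shapes, and update scheme are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def extract_concepts(activations, n_concepts, n_iter=200, seed=0):
    """Illustrative NMF sketch: activations (n_samples, n_features), all >= 0.

    Factorizes A ~= U @ W, where rows of W play the role of "concepts"
    and U holds each sample's concept coefficients.
    """
    rng = np.random.default_rng(seed)
    n, d = activations.shape
    U = rng.random((n, n_concepts))
    W = rng.random((n_concepts, d))
    eps = 1e-9  # guard against division by zero in the updates
    for _ in range(n_iter):
        # Standard multiplicative updates; keep both factors non-negative.
        U *= (activations @ W.T) / (U @ W @ W.T + eps)
        W *= (U.T @ activations) / (U.T @ U @ W + eps)
    return U, W
```

The recursive strategy described in the abstract would then re-apply such a factorization to sub-concepts at earlier layers; that step is omitted here.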
doi 10.48550/arxiv.2211.10154
format Article
creationdate 2022-11-17
rights http://creativecommons.org/licenses/by/4.0 (free to read)
links https://arxiv.org/abs/2211.10154 ; https://doi.org/10.48550/arXiv.2211.10154
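The concept-importance estimation via Sobol indices mentioned in the abstract can be sketched with a generic Jansen-style total-order estimator; this is a textbook variance-based sensitivity sketch under the assumption of uniform inputs, not the paper's exact procedure:

```python
import numpy as np

def total_sobol_indices(f, n_dims, n_samples=1024, seed=0):
    """Jansen estimator for total-order Sobol indices of f: (n, d) -> (n,).

    S_T[i] = E[(f(A) - f(A with column i resampled))^2] / (2 * Var[f]),
    i.e. the share of output variance attributable to input i (incl. interactions).
    """
    rng = np.random.default_rng(seed)
    A = rng.random((n_samples, n_dims))  # base sample
    B = rng.random((n_samples, n_dims))  # independent resample
    fA = f(A)
    var = fA.var()
    S = np.empty(n_dims)
    for i in range(n_dims):
        ABi = A.copy()
        ABi[:, i] = B[:, i]  # resample only coordinate i
        S[i] = np.mean((fA - f(ABi)) ** 2) / (2 * var)
    return S
```

In a CRAFT-like setting, `f` would map perturbed concept coefficients through the rest of the network to the class logit, so each index measures how much a concept drives the decision.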
identifier DOI: 10.48550/arxiv.2211.10154
language eng
source arXiv.org
subjects Computer Science - Artificial Intelligence
Computer Science - Computer Vision and Pattern Recognition
title CRAFT: Concept Recursive Activation FacTorization for Explainability