A Trustworthy Counterfactual Explanation Method With Latent Space Smoothing
Despite the large-scale adoption of Artificial Intelligence (AI) models in healthcare, there is an urgent need for trustworthy tools that can rigorously trace back model decisions so that the models behave reliably. Counterfactual explanations take a counter-intuitive approach, allowing users to explore "what if" scenarios...
Saved in:
Published in: | IEEE transactions on image processing, 2024, Vol.33, p.4584-4599 |
---|---|
Main authors: | Li, Yan ; Cai, Xia ; Wu, Chunwei ; Lin, Xiao ; Cao, Guitao |
Format: | Article |
Language: | eng |
Subjects: | Artificial intelligence ; Closed box ; Confidence intervals ; counterfactual explanation ; Dimensional analysis ; disentanglement representation ; Embedding ; image processing ; Noise ; Principal components analysis ; Regularization ; Robustness ; Semantics ; Smoothing ; Smoothing methods ; State-of-the-art reviews ; Trustworthiness ; Trustworthy AI ; Uncertainty ; X-ray imaging |
Online access: | Order full text |
container_end_page | 4599 |
---|---|
container_issue | |
container_start_page | 4584 |
container_title | IEEE transactions on image processing |
container_volume | 33 |
creator | Li, Yan ; Cai, Xia ; Wu, Chunwei ; Lin, Xiao ; Cao, Guitao |
description | Despite the large-scale adoption of Artificial Intelligence (AI) models in healthcare, there is an urgent need for trustworthy tools that can rigorously trace back model decisions so that the models behave reliably. Counterfactual explanations take a counter-intuitive approach, allowing users to explore "what if" scenarios, and are gradually becoming popular in the trustworthy-AI field. However, most previous work on counterfactual explanation of models cannot credibly generate in-distribution attributions, produces adversarial examples, or fails to give a confidence interval for the explanation. Hence, in this paper, we propose a novel approach that generates counterfactuals in a locally smooth, directed semantic embedding space and, at the same time, gives an uncertainty estimate for the counterfactual generation process. Specifically, we identify a low-dimensional directed semantic embedding space by applying Principal Component Analysis (PCA) to a differentiable generative model. Then, we propose a latent space smoothing regularization that keeps the counterfactual search in-distribution, so that visually imperceptible changes are more robust to adversarial perturbations. Moreover, we put forth an uncertainty estimation framework for evaluating counterfactual uncertainty. Extensive experiments on the challenging, realistic Chest X-ray and CelebA datasets show that our approach performs consistently well and outperforms several existing state-of-the-art baseline approaches. |
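The description above mentions applying PCA to a generative model's latent space to obtain a low-dimensional set of semantic directions. As a rough illustration of that one step, the sketch below extracts principal directions from latent codes and restricts a counterfactual edit to them. It is a minimal sketch under assumed placeholder data (`latent_codes`, `alpha`, and the dimensions are hypothetical), not the authors' implementation.

```python
# Minimal sketch (not the authors' implementation): use PCA to find a
# low-dimensional basis of semantic directions in a generative model's
# latent space, then move a latent code only along that basis.
# `latent_codes` stands in for codes obtained by encoding real images
# with a pretrained differentiable generative model.
import numpy as np

rng = np.random.default_rng(0)
latent_codes = rng.normal(size=(1000, 512))  # placeholder latent codes

# PCA via SVD on the centered codes; rows of `components` are the
# principal (highest-variance) directions in latent space.
mean = latent_codes.mean(axis=0)
_, _, components = np.linalg.svd(latent_codes - mean, full_matrices=False)

k = 16
semantic_basis = components[:k]  # (k, 512) low-dimensional semantic basis

# A counterfactual edit restricted to this basis: z' = z + B^T alpha.
# Keeping alpha small and low-dimensional confines the search to a few
# smooth semantic directions instead of arbitrary pixel-level noise.
z = latent_codes[0]
alpha = np.zeros(k)
alpha[0] = 0.5                   # nudge along the leading direction only
z_counterfactual = z + semantic_basis.T @ alpha
print(z_counterfactual.shape)    # (512,)
```

Restricting edits to the top-k principal directions is one plausible reading of searching a "low-dimensional directed semantic embedding space": high-variance latent directions tend to correspond to coherent semantic changes, which helps keep counterfactuals in-distribution rather than adversarial.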
doi_str_mv | 10.1109/TIP.2024.3442614 |
format | Article |
fullrecord | recordid: cdi_pubmed_primary_39159026 ; ieee_id: 10639340 ; pqid: 3096383477 ; identifiers: ISSN 1057-7149, EISSN 1941-0042, DOI 10.1109/TIP.2024.3442614, PMID 39159026, CODEN IIPRE4 ; publisher: United States: IEEE ; rights: Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2024 ; orcid: 0000-0001-6209-3714, 0000-0002-4059-4806, 0000-0002-8403-7614, 0000-0002-8805-7129 |
fulltext | fulltext_linktorsrc |
identifier | ISSN: 1057-7149 |
ispartof | IEEE transactions on image processing, 2024, Vol.33, p.4584-4599 |
issn | 1057-7149 ; 1941-0042 |
language | eng |
recordid | cdi_pubmed_primary_39159026 |
source | IEEE Electronic Library (IEL) |
subjects | Artificial intelligence ; Closed box ; Confidence intervals ; counterfactual explanation ; Dimensional analysis ; disentanglement representation ; Embedding ; image processing ; Noise ; Principal components analysis ; Regularization ; Robustness ; Semantics ; Smoothing ; Smoothing methods ; State-of-the-art reviews ; Trustworthiness ; Trustworthy AI ; Uncertainty ; X-ray imaging |
title | A Trustworthy Counterfactual Explanation Method With Latent Space Smoothing |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2024-12-19T16%3A04%3A22IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_RIE&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=A%20Trustworthy%20Counterfactual%20Explanation%20Method%20With%20Latent%20Space%20Smoothing&rft.jtitle=IEEE%20transactions%20on%20image%20processing&rft.au=Li,%20Yan&rft.date=2024&rft.volume=33&rft.spage=4584&rft.epage=4599&rft.pages=4584-4599&rft.issn=1057-7149&rft.eissn=1941-0042&rft.coden=IIPRE4&rft_id=info:doi/10.1109/TIP.2024.3442614&rft_dat=%3Cproquest_RIE%3E3094820335%3C/proquest_RIE%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=3096383477&rft_id=info:pmid/39159026&rft_ieee_id=10639340&rfr_iscdi=true |