Active Globally Explainable Learning for Medical Images via Class Association Embedding and Cyclic Adversarial Generation
Explainability poses a major challenge to artificial intelligence (AI) techniques. Current studies on explainable AI (XAI) lack the efficiency to extract global knowledge about the learning task, and thus suffer from deficiencies such as imprecise saliency, absence of context awareness, and vague meaning. In this...
Saved in:
Main Authors: | Xie, Ruitao ; Chen, Jingbang ; Jiang, Limai ; Xiao, Rui ; Pan, Yi ; Cai, Yunpeng |
---|---|
Format: | Article |
Language: | eng |
Subjects: | Computer Science - Artificial Intelligence ; Computer Science - Computer Vision and Pattern Recognition |
Online Access: | Order full text |
creator | Xie, Ruitao ; Chen, Jingbang ; Jiang, Limai ; Xiao, Rui ; Pan, Yi ; Cai, Yunpeng |
description | Explainability poses a major challenge to artificial intelligence (AI)
techniques. Current studies on explainable AI (XAI) lack the efficiency to
extract global knowledge about the learning task, and thus suffer from
deficiencies such as imprecise saliency, absence of context awareness, and
vague meaning. In this paper, we propose the class association embedding (CAE)
approach to address these issues. We employ an encoder-decoder architecture to
embed sample features and simultaneously separate them into class-related and
individual-related style vectors. Recombining the individual-style code of a
given sample with the class-style code of another, following a cyclic
adversarial learning strategy, yields a synthetic sample that preserves the
individual characteristics but changes the class assignment. Class association
embedding distills the global class-related features of all instances into a
unified domain with good separation between classes. The transition rules
between different classes can then be extracted and applied to individual
instances. We further propose an active XAI framework that manipulates the
class-style vector of a given sample along guided paths toward the
counter-classes, producing a series of counterfactual synthetic samples with
identical individual characteristics. Comparing these counterfactual samples
with the original ones provides a global, intuitive illustration of the nature
of the classification task. We apply the framework to medical image
classification tasks and show that it achieves more precise saliency maps with
stronger context-aware representation than existing methods. Moreover, the
disease pathology can be directly visualized by traversing paths in the
class-style space. |
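
To make the described mechanism concrete, below is a minimal PyTorch sketch of the two core operations in the abstract: splitting an embedding into class-style and individual-style codes, swapping class codes between samples, and traversing the class-style space toward a counter-class. This is an illustrative assumption, not the authors' released implementation: the network sizes, the 28x28 input, the linear interpolation path, and the function names (`swap_class_code`, `traverse_to_counter_class`) are all hypothetical, and the cyclic adversarial losses that supervise the swapped reconstructions are omitted.

```python
# Minimal sketch of class association embedding (CAE), assuming 28x28
# single-channel images. All dimensions and names are illustrative.
import torch
import torch.nn as nn

class CAE(nn.Module):
    def __init__(self, in_ch=1, class_dim=8, indiv_dim=64):
        super().__init__()
        # Shared convolutional feature extractor: 28x28 -> 14x14 -> 7x7.
        self.backbone = nn.Sequential(
            nn.Conv2d(in_ch, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Two heads split the embedding into a class-style code and an
        # individual-style code.
        self.to_class = nn.Linear(64, class_dim)
        self.to_indiv = nn.Linear(64, indiv_dim)
        # Decoder reconstructs an image from the concatenated codes.
        self.decode = nn.Sequential(
            nn.Linear(class_dim + indiv_dim, 64 * 7 * 7), nn.ReLU(),
            nn.Unflatten(1, (64, 7, 7)),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, in_ch, 4, stride=2, padding=1),
            nn.Sigmoid(),
        )

    def encode(self, x):
        h = self.backbone(x)
        return self.to_class(h), self.to_indiv(h)

    def forward(self, x):
        c, s = self.encode(x)
        return self.decode(torch.cat([c, s], dim=1))


def swap_class_code(model, x_a, x_b):
    """Keep x_a's individual-style code, borrow x_b's class-style code."""
    c_a, s_a = model.encode(x_a)
    c_b, _ = model.encode(x_b)
    return model.decode(torch.cat([c_b, s_a], dim=1))


def traverse_to_counter_class(model, x, counter_centroid, steps=8):
    """Move the class-style code linearly toward a counter-class centroid,
    decoding a counterfactual at each step (individual code held fixed)."""
    c, s = model.encode(x)
    frames = []
    for t in torch.linspace(0.0, 1.0, steps):
        c_t = (1 - t) * c + t * counter_centroid
        frames.append(model.decode(torch.cat([c_t, s], dim=1)))
    return frames
```

In the full method as described, a discriminator and cycle-consistency reconstruction would constrain the output of `swap_class_code` to remain a realistic member of the borrowed class, and the frames returned by `traverse_to_counter_class` form the counterfactual series whose differences from the original image yield the saliency explanation.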
doi_str_mv | 10.48550/arxiv.2306.07306 |
format | Article |
fulltext | fulltext_linktorsrc |
identifier | DOI: 10.48550/arxiv.2306.07306 |
language | eng |
recordid | cdi_arxiv_primary_2306_07306 |
source | arXiv.org |
subjects | Computer Science - Artificial Intelligence ; Computer Science - Computer Vision and Pattern Recognition |
title | Active Globally Explainable Learning for Medical Images via Class Association Embedding and Cyclic Adversarial Generation |
url | https://arxiv.org/abs/2306.07306 |