A Trustworthy Counterfactual Explanation Method With Latent Space Smoothing

Detailed Description

Bibliographic Details
Published in: IEEE Transactions on Image Processing, 2024, Vol. 33, pp. 4584-4599
Main authors: Li, Yan; Cai, Xia; Wu, Chunwei; Lin, Xiao; Cao, Guitao
Format: Article
Language: English
Subjects:
Online access: Order full text
Description
Summary: Despite the large-scale adoption of Artificial Intelligence (AI) models in healthcare, there is an urgent need for trustworthy tools that can rigorously backtrack model decisions so that the models behave reliably. Counterfactual explanations allow users to explore "what if" scenarios and are gradually becoming popular in the trustworthy-AI field. However, most previous work on counterfactual explanation cannot credibly generate in-distribution attributions, produces adversarial examples, or fails to give a confidence interval for the explanation. Hence, in this paper, we propose a novel approach that generates counterfactuals in a locally smooth, directed semantic embedding space and, at the same time, gives an uncertainty estimate for the counterfactual generation process. Specifically, we identify a low-dimensional directed semantic embedding space by applying Principal Component Analysis (PCA) in a differentiable generative model. Then, we propose a latent space smoothing regularization that keeps the counterfactual search in-distribution, so that visually imperceptible changes are more robust to adversarial perturbations. Moreover, we put forth an uncertainty estimation framework for evaluating counterfactual uncertainty. Extensive experiments on the challenging real-world Chest X-ray and CelebA datasets show that our approach performs consistently well and outperforms several existing state-of-the-art baselines.
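The abstract's core idea of finding semantic directions via PCA on a generative model's latent codes can be sketched roughly as follows. This is a minimal illustration, not the paper's method: the latent codes are random stand-ins, and the dimensions, sample count, and step size are arbitrary illustrative choices.

```python
import numpy as np

# Hypothetical sketch: recover low-dimensional "semantic" directions in a
# generator's latent space with PCA, then edit a latent code along one
# direction to produce a candidate counterfactual code.
rng = np.random.default_rng(0)
latent_dim, n_samples = 64, 1000

# Stand-in for latent codes that would come from encoding real images.
Z = rng.normal(size=(n_samples, latent_dim))

# PCA via SVD of the mean-centered codes; rows of Vt are orthonormal
# principal directions sorted by explained variance.
mean = Z.mean(axis=0)
_, _, Vt = np.linalg.svd(Z - mean, full_matrices=False)
directions = Vt[:10]  # keep the top-10 directions as the embedding space

# Move one latent code along the first principal direction.
z = Z[0]
alpha = 3.0  # illustrative step size
z_cf = z + alpha * directions[0]
# Decoding z_cf with the generative model would yield the candidate
# counterfactual image (the decoder itself is omitted here).
```

In the paper's setting, the smoothing regularization and uncertainty estimate would additionally constrain how far along such directions the search may move; none of that is reflected in this sketch.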
ISSN:1057-7149
1941-0042
DOI: 10.1109/TIP.2024.3442614