PCAIME: Principal Component Analysis-Enhanced Approximate Inverse Model Explanations Through Dimensional Decomposition and Expansion


Detailed Description

Bibliographic Details
Published in: IEEE Access, 2024, Vol. 12, p. 121093-121113
Main author: Nakanishi, Takafumi
Format: Article
Language: English
Subjects:
Online access: Full text
Description
Summary: Complex "black-box" artificial intelligence (AI) models are interpreted using interpretable machine learning and explainable AI (XAI); assessing global and local feature importance is therefore crucial. The previously proposed approximate inverse model explanation (AIME) offers unified explanations of global and local feature importance. This study builds on that foundation by focusing on assessing feature contributions while also examining the multicollinearity and correlation among features in XAI-derived explanations. Because advanced AI and machine-learning models inherently manage multicollinearity and correlations among features, XAI methods must explain these dynamics clearly to fully account for the models' estimation results and behaviors. This study proposes a new technique, principal component analysis-enhanced approximate inverse model explanation (PCAIME), which extends AIME with dimensionality decomposition and expansion capabilities such as PCA. PCAIME derives contributing features, demonstrates the multicollinearity and correlation between features and their contributions through a two-dimensional heat map of principal components, and reveals the features selected after dimensionality reduction. Experiments on the wine-quality and automobile miles-per-gallon datasets compared the effectiveness of local interpretable model-agnostic explanations (LIME), AIME, and PCAIME, particularly for analyzing local feature importance. PCAIME outperformed its counterparts by effectively revealing feature correlations and providing a more comprehensive view of feature interactions. Notably, PCAIME estimated both global and local feature importance and offered novel insights by simultaneously visualizing feature correlations through heat maps.
PCAIME could improve the understanding of complex algorithms and datasets, promoting transparent AI and machine learning in healthcare, finance, and public policy.
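The abstract's core idea of dimensional decomposition and expansion can be illustrated generically: project data onto principal components, attribute importance in component space, then map the attributions back to the original features as a features-by-components matrix suitable for a two-dimensional heat map. The sketch below is only a minimal illustration of that generic pattern using a least-squares surrogate and synthetic data; it is not the PCAIME algorithm itself, which is built on an approximate inverse of the explained model (see the paper for details). All variable names and the toy dataset are assumptions for illustration.

```python
import numpy as np

# Synthetic data with deliberate collinearity: feature 3 nearly copies feature 0.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
X[:, 3] = X[:, 0] + 0.1 * rng.normal(size=200)
y = 2.0 * X[:, 0] - 1.0 * X[:, 1] + rng.normal(scale=0.1, size=200)

# Step 1: "dimensional decomposition" -- PCA via SVD of the centered data.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt.T                     # scores in principal-component space

# Step 2: attribute importance in component space (here: a simple
# least-squares surrogate, standing in for a model-specific explainer).
w_pc, *_ = np.linalg.lstsq(Z, y - y.mean(), rcond=None)

# Step 3: "dimensional expansion" -- weight each component's loadings by its
# importance. The resulting (features x components) matrix is the kind of
# object one could render as a 2-D heat map to expose how correlated
# features share contributions across components.
contrib_map = Vt.T * w_pc         # shape: (n_features, n_components)
feature_importance = contrib_map.sum(axis=1)
```

Summing the map over components collapses it back to a per-feature importance vector; keeping it two-dimensional is what lets correlated features (here, features 0 and 3) be seen loading on the same components rather than appearing independent.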
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2024.3450299