Evaluation of Instance-Based Explanations: An In-Depth Analysis of Counterfactual Evaluation Metrics, Challenges, and the CEval Toolkit

Bibliographic Details
Published in: IEEE Access, 2024, Vol. 12, p. 137683-137695
Authors: Bayrak, Betul; Bach, Kerstin
Format: Article
Language: English
Subjects:
Online access: Full text
Description
Summary: In eXplainable Artificial Intelligence (XAI), instance-based explanations have gained importance as a method for illuminating complex models by highlighting differences or similarities between samples and their explanations. Evaluating these explanations is crucial for assessing their quality and effectiveness. However, the quantitative evaluation of instance-based explanation methods suffers from inconsistencies and variations in terminology and metrics. Addressing this, our survey provides a unified notation for the evaluation metrics of instance-based explanations, with a particular focus on counterfactual explanations. Further, it explores associated trade-offs, identifies areas for improvement, and offers a practical Python toolkit, CEval. Key contributions include a survey of quantitative evaluation metrics, support for practical counterfactual evaluation through the package, and insights into the limitations of explanation evaluation and future directions.
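The abstract refers to quantitative metrics for evaluating counterfactual explanations. As a rough illustration of what such metrics measure, the sketch below implements three commonly cited ones (validity, proximity, and sparsity) with NumPy. The function names, the toy threshold classifier, and the example data are assumptions made for illustration only; they are not the CEval API.

```python
import numpy as np

def validity(model, cf, target_class):
    """Fraction of counterfactuals that actually reach the target class."""
    return float(np.mean(model(cf) == target_class))

def proximity(x, cf):
    """Mean L1 distance between originals and their counterfactuals."""
    return float(np.mean(np.abs(cf - x).sum(axis=1)))

def sparsity(x, cf):
    """Mean number of features changed per counterfactual."""
    return float(np.mean((cf != x).sum(axis=1)))

# Toy classifier: predicts class 1 when the feature sum exceeds 1.0.
model = lambda X: (X.sum(axis=1) > 1.0).astype(int)

x  = np.array([[0.2, 0.3], [0.1, 0.4]])   # originals, both class 0
cf = np.array([[0.9, 0.3], [0.1, 1.2]])   # candidate counterfactuals

print(validity(model, cf, target_class=1))  # 1.0: both flip to class 1
print(proximity(x, cf))                     # 0.75: mean L1 cost of the change
print(sparsity(x, cf))                      # 1.0: one feature changed each
```

These three quantities illustrate the trade-offs the survey discusses: a counterfactual can be made more valid by moving further from the original sample, which worsens proximity and sparsity.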
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2024.3410540