XpertAI: uncovering model strategies for sub-manifolds
Saved in:
Format: | Article |
Language: | eng |
Online access: | Order full text |
Abstract: | In recent years, Explainable AI (XAI) methods have facilitated profound
validation and knowledge extraction from ML models. While extensively studied
for classification, few XAI solutions have addressed the challenges specific to
regression models. In regression, explanations need to be precisely formulated
to address specific user queries (e.g., distinguishing between 'Why is the
output above 0?' and 'Why is the output above 50?'). They should furthermore
reflect the model's behavior on the relevant data sub-manifold. In this paper,
we introduce XpertAI, a framework that disentangles the prediction strategy
into multiple range-specific sub-strategies and allows the formulation of
precise queries about the model (the 'explanandum') as a linear combination of
those sub-strategies. XpertAI is formulated generally to work alongside popular
XAI attribution techniques based on occlusion, gradient integration, or
reverse propagation. Qualitative and quantitative results demonstrate the
benefits of our approach. |
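The core idea in the abstract — answering a range-specific query as a linear combination of range-specific sub-strategy attributions — can be sketched as follows. This is a minimal illustration, not the paper's implementation; the function name, the attribution matrix, and the query weights are all assumptions for demonstration.

```python
import numpy as np

def combine_substrategies(attributions, weights):
    """Hypothetical sketch: form an explanation for a user query as a
    linear combination of range-specific attribution maps.

    attributions: shape (K, F) -- K sub-strategies, F input features
    weights: shape (K,) -- query-specific mixing coefficients
    """
    attributions = np.asarray(attributions, dtype=float)
    weights = np.asarray(weights, dtype=float)
    # Weighted sum over sub-strategies yields one attribution map per query.
    return weights @ attributions

# Assumed example: three sub-strategies over three features, where each row
# is the attribution map of one output range (e.g., from occlusion or
# gradient integration).
A = [[1.0, 0.0, 0.5],
     [0.2, 1.0, 0.0],
     [0.0, 0.3, 1.0]]

# A query like 'Why is the output above 50?' might weight only the
# sub-strategies covering the upper output ranges (weights are illustrative).
w = [0.0, 0.5, 0.5]
print(combine_substrategies(A, w))  # -> [0.1  0.65 0.5 ]
```

Different queries (e.g., 'Why is the output above 0?') would simply use different weight vectors over the same set of sub-strategy attributions.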
DOI: | 10.48550/arxiv.2403.07486 |