Explaining Probabilistic Models with Distributional Values
Main authors: , , ,
Format: Article
Language: English
Subjects:
Online access: Order full text
Abstract: A large branch of explainable machine learning is grounded in cooperative game theory. However, research indicates that game-theoretic explanations may mislead or be hard to interpret. We argue that there is often a critical mismatch between what one wishes to explain (e.g. the output of a classifier) and what current methods such as SHAP explain (e.g. the scalar probability of a class). This paper addresses this gap for probabilistic models by generalising cooperative games and value operators. We introduce distributional values, random variables that track changes in the model output (e.g. flipping of the predicted class), and derive their analytic expressions for games with Gaussian, Bernoulli and Categorical payoffs. We further establish several characterising properties and show that our framework provides fine-grained and insightful explanations in case studies on vision and language models.
DOI: 10.48550/arxiv.2402.09947
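Since the record carries only the abstract, the following is a minimal, hypothetical sketch of the core idea rather than the paper's method: instead of averaging a scalar payoff into a single Shapley-style attribution, one can record the distribution of prediction changes (e.g. class flips) when a feature joins a random coalition. The paper derives analytic expressions; this sketch only estimates the distribution by plain Monte Carlo with uniform coalition sampling rather than exact Shapley weighting, and every name in it (distributional_value_mc, predict_proba, the toy model) is an assumption for illustration, not the authors' API.

```python
import numpy as np

def distributional_value_mc(predict_proba, x, baseline, feature,
                            n_samples=1000, seed=0):
    """Estimate a distributional value for one feature by Monte Carlo.

    Rather than averaging a scalar payoff (as SHAP does with a class
    probability), record the joint distribution of (class before, class
    after) adding `feature` to a random coalition of the other features.
    Off-diagonal mass corresponds to flips of the predicted class.
    Coalitions are sampled uniformly here; exact Shapley weighting would
    sample coalition sizes first.
    """
    rng = np.random.default_rng(seed)
    d = len(x)
    counts = {}
    for _ in range(n_samples):
        mask = rng.random(d) < 0.5        # random coalition of present features
        mask[feature] = False             # feature of interest starts absent
        z = np.where(mask, x, baseline)   # absent features take baseline values
        c_before = int(np.argmax(predict_proba(z)))
        z[feature] = x[feature]           # now add the feature to the coalition
        c_after = int(np.argmax(predict_proba(z)))
        counts[(c_before, c_after)] = counts.get((c_before, c_after), 0) + 1
    return {k: v / n_samples for k, v in counts.items()}

# Toy 3-class "model": softmax over a fixed linear map (purely illustrative).
W = np.array([[ 1.0, -0.5,  0.2],
              [-0.3,  0.8, -0.1],
              [ 0.1,  0.2,  0.9]])

def predict_proba(z):
    logits = W @ z
    e = np.exp(logits - logits.max())
    return e / e.sum()

x = np.array([1.0, 2.0, -1.0])   # instance to explain
baseline = np.zeros(3)           # reference input
table = distributional_value_mc(predict_proba, x, baseline, feature=1)
print(table)  # entries with c_before != c_after count predicted-class flips
```

The returned table is the kind of object the abstract gestures at: a distribution over output changes rather than a single attribution score, from which a scalar summary (e.g. the total flip probability) can still be read off if needed.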