A Causal Lens for Peeking into Black Box Predictive Models: Predictive Model Interpretation via Causal Attribution
Format: Article
Language: English
Abstract: With the increasing adoption of predictive models trained using machine
learning across a wide range of high-stakes applications, e.g., health care,
security, criminal justice, finance, and education, there is a growing need for
effective techniques for explaining such models and their predictions. We aim
to address this problem in settings where the predictive model is a black box;
that is, we can only observe the response of the model to various inputs, but
have no knowledge of the model's internal structure, its parameters, the
objective function, or the algorithm used to optimize the model. We reduce the
problem of interpreting a black box predictive model to that of estimating the
causal effect of each model input on the model output, from observations of
the model inputs and the corresponding outputs. We estimate these causal
effects using variants of the Rubin-Neyman potential outcomes framework for
estimating causal effects from observational data. We show how the resulting
causal attribution of responsibility for the model output to the different
model inputs can be used to interpret the predictive model and to explain its
predictions. We present experimental results demonstrating the effectiveness
of our approach on deep neural network models trained on one synthetic data
set (where the input variables that affect the output are known by design) and
two real-world data sets: handwritten digit classification and Parkinson's
disease severity prediction. Because our approach requires no knowledge of the
predictive model's internals and makes no assumptions about the black box
beyond observability of its input-output responses, it can be applied, in
principle, to any black box predictive model.
DOI: 10.48550/arxiv.2008.00357
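The abstract describes the method only at a high level, and the paper's potential-outcomes estimators are not reproduced here. As a rough illustration of the core idea (attributing responsibility by intervening on one input of a black box and observing the change in its output), the following is a minimal Python sketch, not the authors' method: every name in it (`average_causal_effect`, `black_box`, the toy data) is hypothetical, and the plain averaging over observed inputs assumes the inputs need no confounding adjustment, which the paper's Rubin-Neyman machinery is designed to handle.

```python
import numpy as np

def average_causal_effect(predict, X, feature, value, baseline_value):
    """Approximate ACE = E[Y | do(X_f = value)] - E[Y | do(X_f = baseline_value)]
    for a model exposed only through `predict`, by overwriting the chosen
    feature in every observed input row and averaging the model's outputs.
    Naive sketch: assumes no confounding adjustment among inputs is needed."""
    X_do = X.copy()
    X_do[:, feature] = value             # intervene: set the feature everywhere
    X_base = X.copy()
    X_base[:, feature] = baseline_value  # baseline intervention
    return float(predict(X_do).mean() - predict(X_base).mean())

# Toy demonstration with a stand-in "black box" we can only query.
rng = np.random.default_rng(0)
X_obs = rng.normal(size=(1000, 3))
black_box = lambda X: 2.0 * X[:, 0] + X[:, 1] ** 2
print(average_causal_effect(black_box, X_obs, feature=0,
                            value=1.0, baseline_value=0.0))  # ~2.0
```

For the stand-in model, the estimate is close to 2.0, the true effect of moving the first input from 0 to 1, while the quadratic second input contributes equally under both interventions and cancels; the sketch touches the model only through its input-output responses, mirroring the black box setting the abstract describes.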