Rethinking interpretation: Input-agnostic saliency mapping of deep visual classifiers
Saved in:
Main author: | , |
---|---|
Format: | Article |
Language: | English |
Subjects: | |
Online access: | Order full text |
Abstract: | Saliency methods provide post-hoc model interpretation by attributing input
features to the model outputs. Current methods mainly achieve this using a
single input sample, and thereby fail to answer input-independent inquiries
about the model. We also show that input-specific saliency mapping is
intrinsically susceptible to misleading feature attribution. Current attempts
to use 'general' input features for model interpretation assume access to a
dataset containing those features, which biases the interpretation. Addressing
this gap, we introduce a new perspective of input-agnostic saliency mapping that
computationally estimates the high-level features attributed by the model to
its outputs. These features are geometrically correlated, and are computed by
accumulating the model's gradient information with respect to an unrestricted
data distribution. To compute these features, we nudge independent data points
over the model's loss surface towards the local minima associated with a
human-understandable concept, e.g., a class label for classifiers. With a
systematic projection, scaling, and refinement process, this information is
transformed into an interpretable visualization without compromising its
model-fidelity. The visualization serves as a stand-alone qualitative
interpretation. With an extensive evaluation, we not only demonstrate
successful visualizations for a variety of concepts for large-scale models, but
also showcase an interesting utility of this new form of saliency mapping by
identifying backdoor signatures in compromised classifiers. |
---|---|
DOI: | 10.48550/arxiv.2303.17836 |
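The core procedure the abstract describes, nudging independent data points toward a class-specific loss minimum while accumulating input gradients, can be sketched roughly as below. This is an illustrative toy, not the paper's implementation: the linear softmax "classifier", its dimensions, and all hyperparameters are assumptions made for the sake of a runnable example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy differentiable "classifier": softmax over a linear map
# (a stand-in for a deep visual model; dimensions are arbitrary).
W = rng.normal(size=(10, 3))  # 10 input features, 3 classes

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def loss_and_grad(x, target):
    """Cross-entropy loss for `target` class and its gradient w.r.t. input x."""
    p = softmax(x @ W)
    onehot = np.eye(W.shape[1])[target]
    loss = -np.log(p[target] + 1e-12)
    grad_x = W @ (p - onehot)  # chain rule through the linear map
    return loss, grad_x

def accumulate_saliency(target, n_points=16, steps=50, lr=0.5):
    """Descend independent random inputs toward the target-class loss
    minimum, accumulating input-gradient magnitudes along the way.
    No dataset is used: starting points come from an unrestricted
    (here, standard normal) distribution."""
    acc = np.zeros(W.shape[0])
    for _ in range(n_points):
        x = rng.normal(size=W.shape[0])  # independent starting point
        for _ in range(steps):
            _, g = loss_and_grad(x, target)
            acc += np.abs(g)             # accumulate gradient information
            x -= lr * g                  # nudge toward the class minimum
    return acc / (n_points * steps)

saliency = accumulate_saliency(target=0)
print(saliency.shape)  # (10,)
```

The resulting per-feature accumulation plays the role of an input-agnostic attribution map; the paper's subsequent projection, scaling, and refinement steps, which turn such accumulated gradients into an interpretable visualization, are not modeled here.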