Sanity Simulations for Saliency Methods
Format: Article
Language: English
Abstract: Saliency methods are a popular class of feature attribution explanation methods that aim to capture a model's predictive reasoning by identifying "important" pixels in an input image. However, the development and adoption of these methods are hindered by the lack of access to ground-truth model reasoning, which prevents accurate evaluation. In this work, we design a synthetic benchmarking framework, SMERF, that allows us to perform ground-truth-based evaluation while controlling the complexity of the model's reasoning. Experimentally, SMERF reveals significant limitations in existing saliency methods and, as a result, represents a useful tool for the development of new saliency methods.
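As a purely illustrative sketch of what ground-truth-based evaluation means here (this is not SMERF's actual benchmark, metric, or code; the function name, metric, and toy data below are assumptions), the snippet scores a saliency map against a synthetic input whose "important" pixels are known by construction:

```python
import numpy as np

def ground_truth_precision(saliency_map, gt_mask, top_k=None):
    """Fraction of the top-k most salient pixels that fall inside the known
    ground-truth region (hypothetical metric for illustration only)."""
    saliency = np.abs(saliency_map).ravel()
    gt = gt_mask.ravel().astype(bool)
    k = int(gt.sum()) if top_k is None else top_k
    top_idx = np.argsort(saliency)[::-1][:k]  # indices of the k most salient pixels
    return gt[top_idx].sum() / k

# Toy synthetic task: the label of a 16x16 image depends only on a 4x4 patch,
# so a faithful saliency method should concentrate its attribution there.
rng = np.random.default_rng(0)
gt_mask = np.zeros((16, 16), dtype=bool)
gt_mask[4:8, 4:8] = True

# Stand-in saliency map (in practice produced by Grad-CAM, SmoothGrad, etc.).
saliency = rng.random((16, 16))
saliency[4:8, 4:8] += 1.0

print(ground_truth_precision(saliency, gt_mask))  # near 1.0 for a faithful map
```

Controlling the "complexity of the model's reasoning" then amounts to varying how the ground-truth region determines the label (e.g., single vs. interacting features), which this toy example does not attempt to capture.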
DOI: 10.48550/arxiv.2105.06506