Valid P-Value for Deep Learning-Driven Salient Region
Format: Article
Language: English
Abstract: Various saliency map methods have been proposed to interpret and explain predictions of deep learning models. Saliency maps allow us to interpret which parts of the input signals have a strong influence on the prediction results. However, since a saliency map is obtained by complex computations in deep learning models, it is often difficult to know how reliable the saliency map itself is. In this study, we propose a method to quantify the reliability of a salient region in the form of p-values. Our idea is to consider a salient region as a hypothesis selected by the trained deep learning model and to employ the selective inference framework. The proposed method can provably control the probability of false positive detections of salient regions. We demonstrate the validity of the proposed method through numerical examples on synthetic and real datasets. Furthermore, we develop a Keras-based framework for conducting the proposed selective inference for a wide class of CNNs without additional implementation cost.
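The core idea behind selective inference can be illustrated with a toy one-dimensional example (this is only a sketch of the general framework, not the paper's method for CNN salient regions, whose selection event is far more complex): if a test statistic was only examined because it exceeded a threshold, a valid p-value must condition on that selection event, which for a Gaussian statistic yields a truncated-normal tail probability. The function names and the threshold below are illustrative assumptions.

```python
import math

def normal_sf(z):
    # Survival function of the standard normal: P(Z > z) = 1 - Phi(z)
    return 0.5 * math.erfc(z / math.sqrt(2.0))

def naive_p(z):
    # Naive one-sided p-value that ignores how z was selected
    return normal_sf(z)

def selective_p(z, c):
    # Selective p-value conditioning on the selection event {Z > c}:
    # P(Z > z | Z > c), i.e. the tail of a normal truncated at c
    assert z > c, "observed statistic must lie inside the selection region"
    return normal_sf(z) / normal_sf(c)

z_obs, threshold = 2.0, 1.5
print(naive_p(z_obs))                  # naive p-value, anti-conservative
print(selective_p(z_obs, threshold))   # larger, selection-adjusted p-value
```

The selective p-value is always at least as large as the naive one, which is how conditioning on the selection event restores control of the false positive rate; in the paper this idea is applied with a selection event defined by the trained network's saliency computation rather than a simple threshold.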
DOI: 10.48550/arxiv.2301.02437