On fine-grained visual explanation in convolutional neural networks

Bibliographic Details
Published in: Digital communications and networks 2023-10, Vol. 9 (5), p. 1141-1147
Authors: Lei, Xia; Fan, Yongkai; Luo, Xiong-Lin
Format: Article
Language: English
Online access: Full text
Description
Summary: Existing explanation methods for Convolutional Neural Networks (CNNs) lack pixel-level visualization and therefore fail to produce reliable fine-grained decision features. Since there are also inconsistencies between an explanation and the actual behavior of the model being interpreted, we propose a Fine-Grained Visual Explanation for CNNs, named F-GVE, which produces a fine-grained explanation that is more consistent with the decision of the original model. The exact backward class-specific gradient with respect to the input image is obtained to highlight the object-related pixels the model used to make its prediction. In addition, for better visualization and less noise, F-GVE selects an appropriate threshold to filter the gradient during the calculation, and the explanation map is obtained by element-wise multiplication of the gradient and the input image, showing fine-grained classification decision features. Experimental results demonstrate that F-GVE has good visual performance and highlights the importance of fine-grained decision features. Moreover, the faithfulness of the explanation is higher, and the method is effective and practical for troubleshooting and debugging.
ISSN: 2352-8648
2468-5925
DOI:10.1016/j.dcan.2022.12.012
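The explanation map described in the summary combines three steps: the class-specific gradient with respect to the input, a threshold that filters out low-magnitude gradient entries, and an element-wise product of the filtered gradient with the input image. The sketch below is only an illustration of that recipe on a toy linear classifier (where the class-specific gradient is analytic), not the paper's F-GVE implementation; the function name and the percentile-based threshold are assumptions for demonstration. For a real CNN, `grad` would instead come from backpropagating the target class score to the input.

```python
import numpy as np

def explanation_map(image, weights, target_class, percentile=80):
    """Gradient-times-input explanation map for a toy linear classifier.

    For a linear model with score_c = weights[c] . x, the class-specific
    gradient of the score w.r.t. the input is exactly weights[c].
    Low-magnitude gradient entries are filtered by a percentile threshold
    to reduce noise, then the remaining gradient is multiplied
    element-wise with the input image.
    """
    # Exact class-specific gradient w.r.t. the (flattened) input.
    grad = weights[target_class].reshape(image.shape)
    # Threshold chosen as a percentile of the gradient magnitudes (assumed heuristic).
    thresh = np.percentile(np.abs(grad), percentile)
    mask = np.abs(grad) >= thresh
    # Element-wise product of filtered gradient and input.
    return grad * mask * image

# Toy usage: a 4x4 "image" and a 3-class linear model.
rng = np.random.default_rng(0)
image = rng.random((4, 4))
weights = rng.standard_normal((3, 16))
emap = explanation_map(image, weights, target_class=1)
```

Only pixels whose gradient magnitude survives the threshold contribute to the map, which is what gives the explanation its sparse, fine-grained character.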