A copula-based visualization technique for a neural network
Published in: | arXiv.org 2020-03 |
---|---|
Main authors: | , , , |
Format: | Article |
Language: | English |
Subjects: | |
Online access: | Full text |
Abstract: | Interpretability in machine learning is defined as the extent to which humans can comprehend the reason for a decision. A neural network, however, is not considered interpretable because of the ambiguity of its decision-making process. In this study, we therefore propose a new algorithm that reveals which feature values a trained neural network considers important and which paths are mainly traced during decision-making. The proposed algorithm defines a score estimated from the correlation coefficients between neural network layers, which can be calculated by applying the concept of a pair copula. In our experiment, we compared this score with the feature importance values of Random Forest, which is often regarded as a highly interpretable algorithm, and confirmed that the results were consistent with each other. Because the algorithm identifies the paths that contribute to the classification or prediction results, it also suggests an approach to compressing a neural network and tuning its parameters. |
---|---|
ISSN: | 2331-8422 |
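The abstract above describes a score built from correlation coefficients between neural network layers and a comparison against Random Forest feature importances. The snippet below is only a rough, illustrative sketch of that idea, not the authors' implementation: it replaces the pair-copula calculation with plain Pearson correlations between layer activations, and every dataset, model, and variable name in it is an assumption chosen for the example.

```python
# Illustrative sketch only: scores input features by correlating them with hidden-layer
# activations and the output logit, as a crude stand-in for the copula-based path
# tracing described in the abstract, then compares the result with Random Forest.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X = StandardScaler().fit_transform(X)

# Small MLP with one hidden layer; its activations stand in for an intermediate "layer".
mlp = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0).fit(X, y)

# Hidden-layer activations: relu(X @ W1 + b1), and the output-layer pre-activation.
hidden = np.maximum(0, X @ mlp.coefs_[0] + mlp.intercepts_[0])
logit = hidden @ mlp.coefs_[1] + mlp.intercepts_[1]  # shape (n_samples, 1)

def abs_corr(a, b):
    """Absolute Pearson correlation between each column of `a` and each column of `b`."""
    a = (a - a.mean(0)) / (a.std(0) + 1e-12)
    b = (b - b.mean(0)) / (b.std(0) + 1e-12)
    return np.abs(a.T @ b) / len(a)

# Path score for feature i: sum over hidden units j of
# |corr(x_i, h_j)| * |corr(h_j, logit)|.
feat_to_hidden = abs_corr(X, hidden)           # (n_features, n_hidden)
hidden_to_out = abs_corr(hidden, logit)[:, 0]  # (n_hidden,)
nn_score = feat_to_hidden @ hidden_to_out
nn_score /= nn_score.sum()

# Random Forest feature importances for comparison, as in the abstract.
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
rf_score = rf.feature_importances_

# Rank correlation between the two feature rankings.
ranks_nn = np.argsort(np.argsort(nn_score))
ranks_rf = np.argsort(np.argsort(rf_score))
print("rank correlation:", np.corrcoef(ranks_nn, ranks_rf)[0, 1])
```

A high rank correlation in this toy setup mirrors the consistency the abstract reports between the copula-based score and Random Forest importances; the actual paper derives the per-edge dependence measures from pair copulas rather than from raw Pearson correlations.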