A copula-based visualization technique for a neural network
Saved in:
Main authors: | , , , |
Format: | Article |
Language: | eng |
Subjects: | |
Online access: | Order full text |
Summary: | Interpretability of machine learning is defined as the extent to which humans
can comprehend the reason for a decision. A neural network, however, is not
considered interpretable because of the ambiguity in its decision-making process.
In this study, we therefore propose a new algorithm that reveals which feature
values the trained neural network considers important and which paths are
mainly traced in its decision-making process. The proposed algorithm defines a
score estimated from the correlation coefficients between neural network
layers, which can be calculated by applying the concept of a pair copula. In
the experiment, we compared the estimated score with the feature importance
values of Random Forest, which is sometimes regarded as a highly interpretable
algorithm, and confirmed that the results were consistent with each other.
Because the algorithm identifies the paths that contribute to the
classification or prediction results, it also suggests an approach to
compressing a neural network and tuning its parameters. |
DOI: | 10.48550/arxiv.2003.12317 |
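
The core idea in the abstract, scoring the connections between layers with copula-based correlations and comparing the resulting per-feature scores to Random Forest importances, can be illustrated with a small sketch. The snippet below is not the authors' implementation: it substitutes a Gaussian-copula (normal-scores) rank transform and plain pairwise correlations for the paper's pair-copula construction, and the helper names (`forward_activations`, `copula_correlation`), the network size, and the choice of the Iris dataset are all illustrative assumptions.

```python
"""Minimal sketch (assumptions, not the paper's method): score input-to-output
paths of a small MLP via rank correlations of layer activations after a
Gaussian-copula transform, then compare with Random Forest importances."""
import numpy as np
from scipy.stats import norm, rankdata
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

X, y = load_iris(return_X_y=True)
X = StandardScaler().fit_transform(X)

# Small MLP whose hidden activations we can reconstruct from its weights.
mlp = MLPClassifier(hidden_layer_sizes=(8, 6), activation="relu",
                    max_iter=2000, random_state=0).fit(X, y)

def forward_activations(model, X):
    """Recompute the activations of every layer, input included."""
    acts, a = [X], X
    for W, b in zip(model.coefs_[:-1], model.intercepts_[:-1]):
        a = np.maximum(0.0, a @ W + b)                        # ReLU hidden layers
        acts.append(a)
    acts.append(a @ model.coefs_[-1] + model.intercepts_[-1])  # output logits
    return acts

def copula_correlation(A, B):
    """|correlation| between columns of A and B after a normal-scores
    (Gaussian-copula) transform of each column."""
    def to_normal_scores(M):
        U = (np.apply_along_axis(rankdata, 0, M) - 0.5) / M.shape[0]
        return norm.ppf(U)
    Za, Zb = to_normal_scores(A), to_normal_scores(B)
    C = np.corrcoef(Za, Zb, rowvar=False)[:A.shape[1], A.shape[1]:]
    return np.abs(np.nan_to_num(C))      # dead (constant) units contribute 0

acts = forward_activations(mlp, X)

# Chain absolute copula correlations layer by layer; entry (i, k) of the
# product approximates how strongly input feature i reaches output unit k
# through the network's paths.
path_strength = copula_correlation(acts[0], acts[1])
for l in range(1, len(acts) - 1):
    path_strength = path_strength @ copula_correlation(acts[l], acts[l + 1])

nn_score = path_strength.sum(axis=1)
nn_score /= nn_score.sum()

rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)

for name, s_nn, s_rf in zip(load_iris().feature_names, nn_score,
                            rf.feature_importances_):
    print(f"{name:25s}  copula-path score: {s_nn:.3f}   RF importance: {s_rf:.3f}")
```

Printing both scores side by side mirrors, in miniature, the comparison described in the abstract: if the two rankings of input features roughly agree, the copula-based path scores behave consistently with the Random Forest feature importances.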