Explainable Multi-View Deep Networks Methodology for Experimental Physics
Format: Article
Language: English
Abstract: Physical experiments often involve multiple imaging representations, such as X-ray scans and microscopic images. Deep learning models have been widely used for supervised analysis in these experiments. Combining different image representations is frequently required to analyze and make a decision properly. Consequently, multi-view data has emerged: datasets in which each sample is described by views from different angles, sources, or modalities. These problems are addressed with the concept of multi-view learning. Understanding the decision-making process of deep learning models is essential for reliable and credible analysis, and many explainability methods have been devised in recent years. Nonetheless, multi-view models lack proper explainability, as their architectures make them challenging to explain. In this paper, we suggest different multi-view architectures for the vision domain, each suited to a different problem, and we also present a methodology for explaining these models. To demonstrate the effectiveness of our methodology, we focus on the domain of High Energy Density Physics (HEDP) experiments, where multiple imaging representations are used to assess the quality of foam samples. We apply our methodology to classify the quality of the foam samples using the suggested multi-view architectures. Through experimental results, we show that choosing an appropriate architecture improves both accuracy (from 78% to 84%) and AUC (from 83% to 93%), and we present a trade-off between performance and explainability. Specifically, we demonstrate that our approach enables the explanation of the individual one-view models, providing insights into the decision-making process of each view. This understanding enhances the interpretability of the overall multi-view model. The source code of this work is available at: https://github.com/Scientific-Computing-Lab-NRCN/Multi-View-Explainability.
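The abstract's claim that each one-view model can be explained independently is easiest to see in a late-fusion design, where every view gets its own backbone. Below is a minimal sketch in PyTorch, assuming two views, ResNet-18 branches, and plain input-gradient saliency as the explanation method; all names and design choices here are illustrative assumptions, not the paper's implementation (see the linked repository for the actual code).

```python
# Minimal sketch of a late-fusion multi-view classifier with per-view
# explanations. Architecture, class names, and the ResNet-18 backbones are
# illustrative assumptions, not the authors' exact implementation.
import torch
import torch.nn as nn
from torchvision import models

class LateFusionMultiView(nn.Module):
    """One CNN branch per view; per-view features are concatenated and fused."""

    def __init__(self, num_views: int = 2, num_classes: int = 2):
        super().__init__()
        # Independent one-view backbones: because each branch is a standard
        # single-view CNN, it can be explained on its own (e.g., with Grad-CAM
        # or input-gradient saliency applied to that branch alone).
        self.branches = nn.ModuleList(
            [models.resnet18(weights=None) for _ in range(num_views)]
        )
        feat_dim = self.branches[0].fc.in_features
        for branch in self.branches:
            branch.fc = nn.Identity()  # expose the per-view feature vector
        self.classifier = nn.Linear(num_views * feat_dim, num_classes)

    def forward(self, views: list[torch.Tensor]) -> torch.Tensor:
        # views[i]: (batch, 3, H, W) tensor for the i-th imaging modality
        feats = [branch(v) for branch, v in zip(self.branches, views)]
        return self.classifier(torch.cat(feats, dim=1))

# Usage: two 224x224 views of the same (hypothetical) foam-sample batch.
model = LateFusionMultiView(num_views=2, num_classes=2)
views = [torch.randn(4, 3, 224, 224).requires_grad_(True) for _ in range(2)]
logits = model(views)  # (4, 2) class logits

# Per-view attribution via plain input gradients, a simple stand-in for the
# paper's explainability methodology: each view gets its own saliency map.
logits[:, 1].sum().backward()
saliency = [v.grad.abs().amax(dim=1) for v in views]  # (4, 224, 224) per view
```

Because fusion happens only at the final classifier, the attribution for one branch depends only on that view's input, which is what makes the per-view explanations in such a design well defined.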
DOI: 10.48550/arXiv.2308.08206