TIME: A Transparent, Interpretable, Model-Adaptive and Explainable Neural Network for Dynamic Physical Processes
Format: Article
Language: English
Abstract: Partial differential equations are infinite-dimensional encoded representations of physical processes. However, assimilating multiple sources of observational data into a coupled representation presents significant challenges. We present a fully convolutional architecture that captures the invariant structure of the domain to reconstruct the observable system. The proposed architecture has significantly fewer weights than other networks for such problems. Our intent is to learn coupled dynamic processes, interpreted as deviations from the true kernels representing isolated processes, in order to achieve model adaptivity. Experimental analysis shows that our architecture is robust and transparent in capturing process kernels and system anomalies. We also show that a high-weight representation is not only redundant but also harms network interpretability. Our design is guided by domain knowledge, with isolated-process representations serving as ground truths for verification. These ground truths allow us to identify redundant kernels and their manifestations in activation maps, guiding designs that are both interpretable and explainable, unlike traditional deep nets.
DOI: 10.48550/arxiv.2003.02426