Explainable nonlinear modelling of multiple time series with invertible neural networks
Main Authors:
Format: Article
Language: English
Subjects:
Online Access: Request full text
Summary: A method for nonlinear topology identification is proposed, based on the assumption that a collection of time series is generated in two steps: i) a vector autoregressive process in a latent space, and ii) nonlinear, component-wise, monotonically increasing observation mappings. The latter mappings are assumed invertible and are modelled as shallow neural networks, so that their inverses can be numerically evaluated and their parameters can be learned with a technique inspired by deep learning. Because of the function inversion, the back-propagation step is not straightforward, and this paper explains how to compute the gradients by applying implicit differentiation. While the model retains the explainability of linear VAR processes, preliminary numerical tests show that the prediction error becomes smaller.
DOI: 10.48550/arxiv.2107.00391
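The record contains no code; the following is a minimal PyTorch sketch of the inversion-plus-implicit-differentiation idea summarised above, assuming a single component-wise monotone map built from positively weighted sigmoid units. The names (MonotoneMap, invert_bisect, inverse_with_implicit_grad), the bisection bracket, and all hyperparameters are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MonotoneMap(nn.Module):
    """Shallow, component-wise, strictly increasing observation map g(z)."""
    def __init__(self, hidden=16):
        super().__init__()
        self.a = nn.Parameter(torch.randn(hidden))  # branch weights (made positive via softplus)
        self.b = nn.Parameter(torch.randn(hidden))  # branch slopes (made positive via softplus)
        self.c = nn.Parameter(torch.randn(hidden))  # branch offsets
        self.d = nn.Parameter(torch.tensor(0.1))    # linear skip term keeps g'(z) > 0

    def forward(self, z):
        # Each branch is an increasing sigmoid; positive weights keep g strictly increasing.
        s = torch.sigmoid(F.softplus(self.b) * z.unsqueeze(-1) + self.c)
        return (F.softplus(self.a) * s).sum(-1) + F.softplus(self.d) * z

def invert_bisect(g, y, lo=-50.0, hi=50.0, iters=60):
    """Solve g(z) = y by bisection; valid because g is strictly increasing.
    The bracket [lo, hi] is an assumption for this toy setting."""
    lo = torch.full_like(y, lo)
    hi = torch.full_like(y, hi)
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        too_low = g(mid) < y
        lo = torch.where(too_low, mid, lo)
        hi = torch.where(too_low, hi, mid)
    return 0.5 * (lo + hi)

def inverse_with_implicit_grad(g, y):
    """Return z = g^{-1}(y) with gradients from the implicit function theorem:
    dz/dy = 1/g'(z) and dz/dtheta = -(dg/dtheta)/g'(z). Attaching one
    differentiable Newton-style correction to the numeric root carries exactly
    these gradients, because y - g(z0) vanishes at the root."""
    with torch.no_grad():
        z0 = invert_bisect(g, y)
    z0 = z0.requires_grad_(True)
    gz = g(z0)
    (gprime,) = torch.autograd.grad(gz.sum(), z0, create_graph=True)
    return z0.detach() + (y - gz) / gprime

# Toy usage: gradients of a loss on the latent series flow into g's parameters.
g = MonotoneMap()
y = torch.linspace(-1.0, 1.0, 5)       # observed samples
z = inverse_with_implicit_grad(g, y)   # latent samples that would feed the VAR model
z.sum().backward()                     # populates gradients of g's parameters
```

The design point the sketch illustrates is the one stated in the summary: the inverse is only available numerically, so back-propagation cannot trace through the root-finding loop directly; implicit differentiation supplies the gradients of the inverse from one evaluation of g and its derivative at the recovered latent value.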