Explainable nonlinear modelling of multiple time series with invertible neural networks

A method for nonlinear topology identification is proposed, based on the assumption that a collection of time series are generated in two steps: i) a vector autoregressive process in a latent space, and ii) a nonlinear, component-wise, monotonically increasing observation mapping. The latter mapping...

Full description

Saved in:
Bibliographic details
Main authors: Lopez-Ramos, Luis Miguel, Roy, Kevin, Beferull-Lozano, Baltasar
Format: Article
Language: eng
Subjects:
Online access: Order full text
creator Lopez-Ramos, Luis Miguel
Roy, Kevin
Beferull-Lozano, Baltasar
description A method for nonlinear topology identification is proposed, based on the assumption that a collection of time series are generated in two steps: i) a vector autoregressive process in a latent space, and ii) a nonlinear, component-wise, monotonically increasing observation mapping. The latter mappings are assumed invertible, and are modelled as shallow neural networks, so that their inverse can be numerically evaluated and their parameters can be learned using a technique inspired by deep learning. Because of the function inversion, the back-propagation step is not straightforward, and this paper explains the steps needed to calculate the gradients by applying implicit differentiation. While the model explainability is the same as that of linear VAR processes, preliminary numerical tests show that the prediction error becomes smaller.
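
The description above mentions two computational steps that may be easier to see in code: numerically inverting a monotone, shallow-network observation mapping, and obtaining gradients through that inverse via implicit differentiation. The following Python sketch illustrates both under stated assumptions; the specific network form, the parameter names w, a, b, and the bisection-based inversion are illustrative choices, not the authors' implementation.

# A minimal, illustrative sketch (not the authors' code) of a monotone shallow network
# used as an observation mapping, its numerical inverse, and the gradient of that
# inverse obtained by implicit differentiation.
import numpy as np

def f(z, w, a, b):
    # Monotone increasing shallow net: f(z) = b + sum_k exp(w_k) * sigmoid(exp(a_k) * z).
    # The exponentials keep the effective weights positive, so f is strictly increasing in z.
    return b + np.sum(np.exp(w) / (1.0 + np.exp(-np.exp(a) * z)))

def f_inverse(y, w, a, b, lo=-50.0, hi=50.0, iters=80):
    # Invert the monotone map numerically by bisection on [lo, hi].
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if f(mid, w, a, b) < y:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def df_dz(z, w, a, b):
    # Partial derivative of f with respect to its input z.
    s = 1.0 / (1.0 + np.exp(-np.exp(a) * z))
    return np.sum(np.exp(w) * np.exp(a) * s * (1.0 - s))

def df_db(z, w, a, b):
    # Partial derivative of f with respect to the bias parameter b.
    return 1.0

# Implicit differentiation: with y held fixed and z = f^{-1}(y; theta), differentiating
# f(z(theta); theta) = y gives  dz/dtheta = -(df/dtheta) / (df/dz),  evaluated at z.
w = np.array([0.0, 0.5, -0.5])
a = np.array([0.2, -0.3, 0.8])
b = 0.3
y = 1.2

z = f_inverse(y, w, a, b)
grad_implicit = -df_db(z, w, a, b) / df_dz(z, w, a, b)

# Finite-difference check of the same gradient of the inverse with respect to b.
eps = 1e-6
grad_fd = (f_inverse(y, w, a, b + eps) - f_inverse(y, w, a, b - eps)) / (2 * eps)
print(grad_implicit, grad_fd)  # the two estimates should agree closely

In the setting described by the abstract, such a per-component inverse would map each observed series back to the latent space, where the VAR coefficients are estimated, and the implicit gradient is what allows the mapping parameters to be trained jointly by back-propagation; the details of that joint training are given in the paper itself.
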
doi_str_mv 10.48550/arxiv.2107.00391
format Article
fulltext fulltext_linktorsrc
identifier DOI: 10.48550/arxiv.2107.00391
language eng
recordid cdi_arxiv_primary_2107_00391
source arXiv.org
subjects Computer Science - Learning
title Explainable nonlinear modelling of multiple time series with invertible neural networks
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-09T13%3A46%3A31IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-arxiv_GOX&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Explainable%20nonlinear%20modelling%20of%20multiple%20time%20series%20with%20invertible%20neural%20networks&rft.au=Lopez-Ramos,%20Luis%20Miguel&rft.date=2021-07-01&rft_id=info:doi/10.48550/arxiv.2107.00391&rft_dat=%3Carxiv_GOX%3E2107_00391%3C/arxiv_GOX%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true