On-Line Learning of Linear Dynamical Systems: Exponential Forgetting in Kalman Filters

The Kalman filter is a key tool for time-series forecasting and analysis. We show that the dependence of the Kalman filter's predictions on the past decays exponentially whenever the process noise is non-degenerate; the Kalman filter may therefore be approximated by regression on a few recent observations. Surprisingly, we also show that some process noise is essential for this exponential decay: with no process noise, the forecast may depend on all of the past uniformly, which makes forecasting more difficult. Based on this insight, we devise an on-line algorithm for improper learning of a linear dynamical system (LDS) that considers only a few of the most recent observations. We use our decay results to provide the first regret bounds with respect to Kalman filters for learning an LDS; that is, we compare our algorithm's results to the best, in hindsight, Kalman filter for a given signal. The algorithm is also practical: its per-update run-time is linear in the regression depth.
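The exponential-forgetting claim in the abstract can be illustrated with a short sketch (not the authors' code; all parameter values are illustrative assumptions): a scalar Kalman filter with state transition x_{t+1} = a·x_t + w_t (process-noise variance q) and observation y_t = x_t + v_t (noise variance r). Perturbing one early observation and tracking the perturbation's effect on later one-step predictions shows the effect shrinking geometrically when q > 0.

```python
def kalman_predictions(ys, a=0.9, q=0.1, r=1.0):
    """One-step-ahead predictions a * x_hat_t of a scalar Kalman filter."""
    x_hat, p = 0.0, 1.0              # prior mean and variance
    preds = []
    for y in ys:
        # Measurement update
        k = p / (p + r)              # Kalman gain
        x_hat = x_hat + k * (y - x_hat)
        p = (1 - k) * p
        # Time update (prediction for the next step)
        x_hat = a * x_hat
        p = a * a * p + q
        preds.append(x_hat)
    return preds

ys = [1.0] * 30
base = kalman_predictions(ys)
ys_pert = list(ys)
ys_pert[0] += 1.0                    # perturb the very first observation
pert = kalman_predictions(ys_pert)
diffs = [abs(b - p) for b, p in zip(base, pert)]

# With q > 0 the influence of the perturbed observation decays by a factor
# a * (1 - k_t) < 1 per step, i.e. geometrically.  With q = 0 and a = 1 the
# filter degenerates to a running average, where the influence of the first
# observation decays only polynomially (like 1/t) -- the abstract's caveat.
print(diffs[0], diffs[-1])
```

The variance recursion does not depend on the observations, so both runs share the same gain sequence; the perturbation therefore propagates exactly by the product of the per-step factors a·(1 − k_t).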

Bibliographic details
Published in: arXiv.org, 2018-09
Authors: Kozdoba, Mark; Marecek, Jakub; Tchrakian, Tigran; Mannor, Shie
Format: Article
Language: English
Online access: Full text
DOI: 10.48550/arxiv.1809.05870
EISSN: 2331-8422
Source: arXiv.org; Free E-Journals
Subjects:
Algorithms
Computer Science - Artificial Intelligence
Computer Science - Learning
Decay
Dependence
Dynamical systems
Forecasting
Kalman filters
Machine learning
Mathematics - Optimization and Control
Mathematics - Statistics Theory
Noise
On-line systems
Regression analysis
Statistics - Theory