Dynamic Local Regret for Non-convex Online Forecasting

We consider online forecasting problems for non-convex machine learning models. Forecasting introduces several challenges such as (i) frequent updates are necessary to deal with concept drift issues since the dynamics of the environment change over time, and (ii) the state of the art models are non-convex models. We address these challenges with a novel regret framework. Standard regret measures commonly do not consider both dynamic environment and non-convex models. We introduce a local regret for non-convex models in a dynamic environment. We present an update rule incurring a cost, according to our proposed local regret, which is sublinear in time T. Our update uses time-smoothed gradients. Using a real-world dataset we show that our time-smoothed approach yields several benefits when compared with state-of-the-art competitors: results are more stable against new data; training is more robust to hyperparameter selection; and our approach is more computationally efficient than the alternatives.
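The abstract says only that the update rule "uses time-smoothed gradients"; the record does not spell out the exact scheme. Below is a minimal illustrative sketch in which smoothing is taken to be a uniform average of the last `w` per-round gradients — the window size `w`, the uniform weighting, and the toy quadratic losses are all assumptions for illustration, not the paper's actual algorithm.

```python
# Illustrative sketch of a time-smoothed online gradient update, based only on
# the abstract's description. Uniform averaging over a window `w` is an
# assumption; the paper may use a different smoothing scheme.
from collections import deque

def time_smoothed_sgd(grad_fns, x0, lr=0.1, w=5):
    """Process a stream of per-round gradient functions, stepping along
    the average of the most recent `w` gradients at each round."""
    x = x0
    history = deque(maxlen=w)  # keeps only the last w gradients
    for grad in grad_fns:
        history.append(grad(x))
        avg = sum(history) / len(history)  # time-smoothed gradient
        x = x - lr * avg
    return x

# Toy stream: quadratic losses f_t(x) = (x - c_t)^2 whose minima c_t drift
# over time, mimicking concept drift; the gradient at round t is 2*(x - c_t).
centers = [0.0, 0.1, 0.2, 0.3, 0.4, 0.5]
grads = [lambda x, c=c: 2.0 * (x - c) for c in centers]
x_final = time_smoothed_sgd(grads, x0=1.0)
```

Averaging over a window damps round-to-round gradient noise, which is one plausible reading of why the authors observe more stable results as new data arrives.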

Detailed Description

Bibliographic Details
Main Authors: Aydore, Sergul; Zhu, Tianhao; Foster, Dean
Format: Article
Language: English
Subjects: Computer Science - Learning; Statistics - Machine Learning
Online Access: Order full text
Creators: Aydore, Sergul; Zhu, Tianhao; Foster, Dean
Format: Article
Description: We consider online forecasting problems for non-convex machine learning models. Forecasting introduces several challenges such as (i) frequent updates are necessary to deal with concept drift issues since the dynamics of the environment change over time, and (ii) the state of the art models are non-convex models. We address these challenges with a novel regret framework. Standard regret measures commonly do not consider both dynamic environment and non-convex models. We introduce a local regret for non-convex models in a dynamic environment. We present an update rule incurring a cost, according to our proposed local regret, which is sublinear in time T. Our update uses time-smoothed gradients. Using a real-world dataset we show that our time-smoothed approach yields several benefits when compared with state-of-the-art competitors: results are more stable against new data; training is more robust to hyperparameter selection; and our approach is more computationally efficient than the alternatives.
DOI: 10.48550/arxiv.1910.07927
Identifier: DOI: 10.48550/arxiv.1910.07927
Language: eng
Date: 2019-10-16
Source: arXiv.org
Subjects: Computer Science - Learning; Statistics - Machine Learning
Title: Dynamic Local Regret for Non-convex Online Forecasting
URL: https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-24T18%3A02%3A45IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-arxiv_GOX&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Dynamic%20Local%20Regret%20for%20Non-convex%20Online%20Forecasting&rft.au=Aydore,%20Sergul&rft.date=2019-10-16&rft_id=info:doi/10.48550/arxiv.1910.07927&rft_dat=%3Carxiv_GOX%3E1910_07927%3C/arxiv_GOX%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true