Accelerated Learning with Robustness to Adversarial Regressors

High-order momentum-based parameter update algorithms have seen widespread application in training machine learning models. Recently, connections with variational approaches have led to the derivation of new learning algorithms with accelerated learning guarantees. Such methods, however, have only considered the case of static regressors. There is a significant need for parameter update algorithms which can be proven stable in the presence of adversarial time-varying regressors, as is commonplace in control theory. In this paper, we propose a new discrete-time algorithm which 1) provides stability and asymptotic convergence guarantees in the presence of adversarial regressors by leveraging insights from adaptive control theory and 2) provides non-asymptotic accelerated learning guarantees by leveraging insights from convex optimization. In particular, our algorithm reaches an $\epsilon$ sub-optimal point in at most $\tilde{\mathcal{O}}(1/\sqrt{\epsilon})$ iterations when regressors are constant, matching Nesterov's lower bound of $\Omega(1/\sqrt{\epsilon})$ up to a $\log(1/\epsilon)$ factor, and provides guaranteed bounds for stability when regressors are time-varying. We provide numerical experiments for a variant of Nesterov's provably hard convex optimization problem with time-varying regressors, as well as the problem of recovering an image with a time-varying blur and noise using streaming data.
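
To make the problem setting concrete, the sketch below runs a plain Nesterov-style momentum update on a streaming linear-regression loss whose regressor varies with time. This is a baseline illustration only, not the discrete-time algorithm proposed in the paper, and every quantity in it (the regressor model, measurement model, step size eta, and momentum coefficient beta) is an assumption chosen for the example.

```python
# Minimal sketch of the setting (not the paper's algorithm): a standard
# Nesterov-style momentum update on a streaming linear-regression loss
# whose regressor phi_t changes over time. The regressor model, step size
# eta, and momentum coefficient beta are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(0)
d = 5
theta_star = rng.normal(size=d)          # unknown parameter to be recovered

def regressor(t):
    """Time-varying regressor phi_t (assumed form, for illustration)."""
    return np.sin(0.1 * t * np.arange(1, d + 1)) + 0.1 * rng.normal(size=d)

theta = np.zeros(d)                       # current parameter estimate
theta_prev = np.zeros(d)                  # previous estimate, used for momentum
eta, beta = 0.05, 0.9                     # step size and momentum (assumed)

for t in range(2000):
    phi = regressor(t)
    y = float(phi @ theta_star)           # streaming measurement y_t = phi_t^T theta*
    # Nesterov look-ahead point and gradient of the instantaneous loss
    # 0.5 * (phi_t^T theta - y_t)^2 evaluated at the look-ahead point.
    lookahead = theta + beta * (theta - theta_prev)
    grad = (float(phi @ lookahead) - y) * phi
    theta_prev, theta = theta, lookahead - eta * grad

print("parameter estimation error:", np.linalg.norm(theta - theta_star))
```

Momentum updates of this standard form carry no stability guarantee once the regressor is allowed to vary adversarially; the abstract above positions the proposed algorithm as providing exactly such stability and convergence guarantees while retaining the $\tilde{\mathcal{O}}(1/\sqrt{\epsilon})$ accelerated rate when the regressor is constant.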

Bibliographic Details
Main Authors: Gaudio, Joseph E; Annaswamy, Anuradha M; Moreu, José M; Bolender, Michael A; Gibson, Travis E
Format: Article
Language: English
Subjects: Computer Science - Learning; Computer Science - Systems and Control; Mathematics - Optimization and Control
Online Access: https://arxiv.org/abs/2005.01529
DOI: 10.48550/arxiv.2005.01529
Published: 2020-05-04
Source: arXiv.org