Accelerated Gradient Algorithms with Adaptive Subspace Search for Instance-Faster Optimization

Gradient-based minimax-optimal algorithms have greatly advanced continuous optimization and machine learning. One seminal work due to Yurii Nesterov [Nes83a] established \(\tilde{\mathcal{O}}(\sqrt{L/\mu})\) gradient complexity for minimizing an \(L\)-smooth, \(\mu\)-strongly convex objective. However, an ideal algorithm would adapt to the complexity of the particular objective at hand and achieve faster rates on simpler problems, which prompts us to reconsider two shortcomings of existing optimization modeling and analysis: (i) worst-case optimality is neither instance optimality nor the notion of optimality that matters in practice; (ii) the traditional \(L\)-smoothness condition may not be the right primary abstraction for modern practical problems. In this paper, we open up a new way to design and analyze gradient-based algorithms, with direct applications in machine learning including linear regression and beyond. We introduce two factors \((\alpha, \tau_{\alpha})\) that refine the description of the degeneracy of an optimization problem, based on the observation that the singular values of the Hessian often drop sharply. We design adaptive algorithms that solve simpler problems with fewer gradient (or analogous) oracle accesses and without prior knowledge of these factors. The algorithms also improve the state-of-the-art complexities for several machine learning problems, thereby solving the open problem of how to design faster algorithms in light of known complexity lower bounds. In particular, when the nuclear norm is bounded by \(\mathcal{O}(1)\), we achieve an optimal \(\tilde{\mathcal{O}}(\mu^{-1/3})\) (vs. \(\tilde{\mathcal{O}}(\mu^{-1/2})\)) gradient complexity for linear regression. We hope this work prompts a rethinking of how the difficulty of modern optimization problems is understood.
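Two ingredients of the abstract are easy to reproduce numerically: the classical Nesterov momentum scheme whose \(\tilde{\mathcal{O}}(\sqrt{L/\mu})\) complexity serves as the baseline, and the observation that the Hessian singular values of typical regression problems drop sharply. The sketch below is illustrative only: the problem sizes, the synthetic \(1/i\) singular spectrum, the ridge parameter, and the iteration counts are assumptions chosen for demonstration, and the code implements the classical accelerated method [Nes83a] on a least-squares instance, not the paper's adaptive subspace-search algorithm.

```python
# Illustrative sketch (not the paper's algorithm): build a least-squares problem
# whose Hessian singular values drop sharply -- the degeneracy the factors
# (alpha, tau_alpha) are meant to capture -- and compare plain gradient descent
# with classical Nesterov acceleration [Nes83a]. All sizes and constants below
# are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 100

# Design matrix with a fast-decaying singular spectrum: sigma_i ~ 1/i,
# so the Hessian A^T A has eigenvalues decaying like 1/i^2.
U, _ = np.linalg.qr(rng.standard_normal((n, d)))
V, _ = np.linalg.qr(rng.standard_normal((d, d)))
sigma = 1.0 / np.arange(1, d + 1)
A = U @ np.diag(sigma) @ V.T
b = A @ rng.standard_normal(d) + 0.01 * rng.standard_normal(n)

H = A.T @ A                        # Hessian of the least-squares objective
eigs = np.sort(np.linalg.eigvalsh(H))[::-1]
print("top Hessian eigenvalues:", np.round(eigs[:5], 4))
print("eigenvalue 20 / eigenvalue 1:", eigs[19] / eigs[0])  # sharp drop

mu = 1e-3                          # explicit strong convexity via a ridge term
L = eigs[0] + mu                   # smoothness constant of the regularized loss

def f(x):
    """Objective 0.5*||Ax - b||^2 + 0.5*mu*||x||^2."""
    return 0.5 * np.linalg.norm(A @ x - b) ** 2 + 0.5 * mu * np.linalg.norm(x) ** 2

def grad(x):
    """Gradient of the regularized least-squares objective."""
    return A.T @ (A @ x - b) + mu * x

x_star = np.linalg.solve(H + mu * np.eye(d), A.T @ b)  # closed-form minimizer

# Plain gradient descent with step size 1/L.
x = np.zeros(d)
for _ in range(500):
    x = x - grad(x) / L

# Classical Nesterov acceleration for mu-strongly convex, L-smooth objectives,
# with momentum coefficient (sqrt(L) - sqrt(mu)) / (sqrt(L) + sqrt(mu)).
y = z = np.zeros(d)
beta = (np.sqrt(L) - np.sqrt(mu)) / (np.sqrt(L) + np.sqrt(mu))
for _ in range(500):
    z_new = y - grad(y) / L
    y = z_new + beta * (z_new - z)
    z = z_new

print("GD suboptimality:      ", f(x) - f(x_star))
print("Nesterov suboptimality:", f(z) - f(x_star))
```

On this synthetic instance the Hessian spectrum collapses by more than two orders of magnitude within the first twenty eigenvalues, which is the kind of sharp drop the abstract refers to, and the accelerated iterate should end up several orders of magnitude closer to the optimum than plain gradient descent after the same number of gradient evaluations.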

Bibliographic details
Published in: arXiv.org, 2023-12
Main authors: Liu, Yuanshi; Zhao, Hanzhen; Xu, Yang; Pengyun Yue; Fang, Cong
Format: Article
Language: English
Subjects: Adaptive algorithms, Algorithms, Complexity, Lower bounds, Machine learning, Minimax technique, Optimization, Regression analysis, Smoothness
Online access: Full text
container_title arXiv.org
creator Liu, Yuanshi
Zhao, Hanzhen
Xu, Yang
Pengyun Yue
Fang, Cong
description Gradient-based minimax-optimal algorithms have greatly advanced continuous optimization and machine learning. One seminal work due to Yurii Nesterov [Nes83a] established \(\tilde{\mathcal{O}}(\sqrt{L/\mu})\) gradient complexity for minimizing an \(L\)-smooth, \(\mu\)-strongly convex objective. However, an ideal algorithm would adapt to the complexity of the particular objective at hand and achieve faster rates on simpler problems, which prompts us to reconsider two shortcomings of existing optimization modeling and analysis: (i) worst-case optimality is neither instance optimality nor the notion of optimality that matters in practice; (ii) the traditional \(L\)-smoothness condition may not be the right primary abstraction for modern practical problems. In this paper, we open up a new way to design and analyze gradient-based algorithms, with direct applications in machine learning including linear regression and beyond. We introduce two factors \((\alpha, \tau_{\alpha})\) that refine the description of the degeneracy of an optimization problem, based on the observation that the singular values of the Hessian often drop sharply. We design adaptive algorithms that solve simpler problems with fewer gradient (or analogous) oracle accesses and without prior knowledge of these factors. The algorithms also improve the state-of-the-art complexities for several machine learning problems, thereby solving the open problem of how to design faster algorithms in light of known complexity lower bounds. In particular, when the nuclear norm is bounded by \(\mathcal{O}(1)\), we achieve an optimal \(\tilde{\mathcal{O}}(\mu^{-1/3})\) (vs. \(\tilde{\mathcal{O}}(\mu^{-1/2})\)) gradient complexity for linear regression. We hope this work prompts a rethinking of how the difficulty of modern optimization problems is understood.
format Article
fulltext fulltext
identifier EISSN: 2331-8422
ispartof arXiv.org, 2023-12
issn 2331-8422
language eng
recordid cdi_proquest_journals_2899319943
source Free E-Journals
subjects Adaptive algorithms
Algorithms
Complexity
Lower bounds
Machine learning
Minimax technique
Optimization
Regression analysis
Smoothness
title Accelerated Gradient Algorithms with Adaptive Subspace Search for Instance-Faster Optimization
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2024-12-27T01%3A31%3A53IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest&rft_val_fmt=info:ofi/fmt:kev:mtx:book&rft.genre=document&rft.atitle=Accelerated%20Gradient%20Algorithms%20with%20Adaptive%20Subspace%20Search%20for%20Instance-Faster%20Optimization&rft.jtitle=arXiv.org&rft.au=Liu,%20Yuanshi&rft.date=2023-12-06&rft.eissn=2331-8422&rft_id=info:doi/&rft_dat=%3Cproquest%3E2899319943%3C/proquest%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2899319943&rft_id=info:pmid/&rfr_iscdi=true