Beyond Lipschitz: Sharp Generalization and Excess Risk Bounds for Full-Batch GD
We provide sharp path-dependent generalization and excess risk guarantees for the full-batch Gradient Descent (GD) algorithm on smooth losses (possibly non-Lipschitz, possibly nonconvex). At the heart of our analysis is an upper bound on the generalization error, which implies that average output stability and a bounded expected optimization error at termination lead to generalization. This result shows that a small generalization error occurs along the optimization path, and allows us to bypass Lipschitz or sub-Gaussian assumptions on the loss prevalent in previous works. For nonconvex, convex, and strongly convex losses, we show the explicit dependence of the generalization error in terms of the accumulated path-dependent optimization error, terminal optimization error, number of samples, and number of iterations. For nonconvex smooth losses, we prove that full-batch GD efficiently generalizes close to any stationary point at termination, and recovers the generalization error guarantees of stochastic algorithms with fewer assumptions. For smooth convex losses, we show that the generalization error is tighter than existing bounds for SGD (up to one order of error magnitude). Consequently, the excess risk matches that of SGD with quadratically fewer iterations. Lastly, for strongly convex smooth losses, we show that full-batch GD achieves essentially the same excess risk rate as the state of the art for SGD, but with an exponentially smaller number of iterations (logarithmic in the dataset size).
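For context, the quantities named in the abstract can be sketched in standard notation; the symbols below (sample S, smooth loss f, step sizes, GD iterates w_t) are generic choices for illustration and are not taken verbatim from the paper:

```latex
% Generic setup (an illustrative sketch, not the paper's exact notation):
% sample S = (z_1, ..., z_n), smooth loss f(w; z), step sizes \eta_t, GD output w_T.
\[
  R_S(w) \triangleq \frac{1}{n}\sum_{i=1}^{n} f(w; z_i), \qquad
  R(w) \triangleq \mathbb{E}_{z}\big[f(w; z)\big], \qquad
  w_{t+1} = w_t - \eta_t \nabla R_S(w_t).
\]
% Excess risk of the GD output w_T against a population minimizer w^* (independent of S),
% using \mathbb{E}[R_S(w^*)] = R(w^*) and \min_w R_S(w) \le R_S(w^*):
\[
  \mathbb{E}\big[R(w_T)\big] - R(w^*)
  = \underbrace{\mathbb{E}\big[R(w_T) - R_S(w_T)\big]}_{\text{generalization error}}
    + \mathbb{E}\big[R_S(w_T) - R_S(w^*)\big]
  \le \underbrace{\mathbb{E}\big[R(w_T) - R_S(w_T)\big]}_{\text{generalization error}}
    + \underbrace{\mathbb{E}\Big[R_S(w_T) - \min_{w} R_S(w)\Big]}_{\text{optimization error at termination}}.
\]
```

In this language, the abstract's claim is that bounding the second (optimization) term, together with an average output stability argument, suffices to bound the first, without Lipschitz or sub-Gaussian assumptions on the loss.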
Saved in:
Published in: | arXiv.org 2023-02 |
---|---|
Main authors: | Nikolakakis, Konstantinos E ; Haddadpour, Farzin ; Karbasi, Amin ; Kalogerias, Dionysios S |
Format: | Article |
Language: | eng |
Subjects: | Algorithms ; Entropy ; Error analysis ; Optimization ; Risk |
Online Access: | Full text |
container_title | arXiv.org |
creator | Nikolakakis, Konstantinos E ; Haddadpour, Farzin ; Karbasi, Amin ; Kalogerias, Dionysios S |
description | We provide sharp path-dependent generalization and excess risk guarantees for the full-batch Gradient Descent (GD) algorithm on smooth losses (possibly non-Lipschitz, possibly nonconvex). At the heart of our analysis is an upper bound on the generalization error, which implies that average output stability and a bounded expected optimization error at termination lead to generalization. This result shows that a small generalization error occurs along the optimization path, and allows us to bypass Lipschitz or sub-Gaussian assumptions on the loss prevalent in previous works. For nonconvex, convex, and strongly convex losses, we show the explicit dependence of the generalization error in terms of the accumulated path-dependent optimization error, terminal optimization error, number of samples, and number of iterations. For nonconvex smooth losses, we prove that full-batch GD efficiently generalizes close to any stationary point at termination, and recovers the generalization error guarantees of stochastic algorithms with fewer assumptions. For smooth convex losses, we show that the generalization error is tighter than existing bounds for SGD (up to one order of error magnitude). Consequently, the excess risk matches that of SGD with quadratically fewer iterations. Lastly, for strongly convex smooth losses, we show that full-batch GD achieves essentially the same excess risk rate as the state of the art for SGD, but with an exponentially smaller number of iterations (logarithmic in the dataset size). |
format | Article |
fulltext | fulltext |
identifier | EISSN: 2331-8422 |
ispartof | arXiv.org, 2023-02 |
issn | 2331-8422 |
language | eng |
recordid | cdi_proquest_journals_2655914446 |
source | Free E-Journals |
subjects | Algorithms ; Entropy ; Error analysis ; Optimization ; Risk |
title | Beyond Lipschitz: Sharp Generalization and Excess Risk Bounds for Full-Batch GD |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2024-12-21T17%3A43%3A49IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest&rft_val_fmt=info:ofi/fmt:kev:mtx:book&rft.genre=document&rft.atitle=Beyond%20Lipschitz:%20Sharp%20Generalization%20and%20Excess%20Risk%20Bounds%20for%20Full-Batch%20GD&rft.jtitle=arXiv.org&rft.au=Nikolakakis,%20Konstantinos%20E&rft.date=2023-02-09&rft.eissn=2331-8422&rft_id=info:doi/&rft_dat=%3Cproquest%3E2655914446%3C/proquest%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2655914446&rft_id=info:pmid/&rfr_iscdi=true |
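As a supplement to the record above, the following is a minimal, illustrative sketch of the full-batch GD update analyzed in the abstract, in which every step averages the gradient over the whole sample rather than over a random mini-batch as in SGD. The quadratic loss, step size, and function names are assumptions chosen for illustration only and do not come from the paper.

```python
import numpy as np

def full_batch_gd(X, y, loss_grad, eta=0.05, iters=500):
    """Full-batch gradient descent: each update uses the gradient of the
    empirical risk averaged over the entire dataset (no sampling, unlike SGD)."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(iters):
        # Average the per-example gradients over all n samples.
        grad = np.mean([loss_grad(w, X[i], y[i]) for i in range(n)], axis=0)
        w = w - eta * grad
    return w

def squared_loss_grad(w, x, y):
    # Gradient in w of the smooth loss f(w; (x, y)) = 0.5 * (x @ w - y) ** 2.
    return (x @ w - y) * x

# Synthetic smooth (least-squares) problem, purely for illustration.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
w_true = rng.normal(size=5)
y = X @ w_true + 0.1 * rng.normal(size=200)

w_hat = full_batch_gd(X, y, squared_loss_grad)
print(np.linalg.norm(w_hat - w_true))  # small residual error
```

The generalization and excess risk statements in the abstract concern exactly this deterministic iteration; they compare its iteration count and error guarantees with those known for SGD.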