On the Double Descent of Random Features Models Trained with SGD

We study the generalization properties of random features (RF) regression in high dimensions, optimized by stochastic gradient descent (SGD), in both the under- and over-parameterized regimes. We derive precise non-asymptotic error bounds for RF regression under both constant and polynomial-decay step-size SGD, and observe the double descent phenomenon both theoretically and empirically. Our analysis shows how to cope with multiple sources of randomness (initialization, label noise, and data sampling, as well as stochastic gradients) in the absence of a closed-form solution, and goes beyond the commonly used Gaussian/spherical data assumption. Our theoretical results demonstrate that, with SGD training, RF regression still generalizes well in the interpolation regime, and characterize the double descent behavior through the unimodality of the variance and the monotonic decrease of the bias. We also prove that constant step-size SGD incurs no loss in convergence rate compared with the exact minimum-norm interpolator, providing a theoretical justification for using SGD in practice.

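As a rough illustration of the setting described above, the sketch below trains the second layer of a random features model with constant step-size SGD on synthetic Gaussian data and compares it against the exact minimum-norm interpolator while the number of features m is swept across the interpolation threshold. The ReLU feature map, data model, step size, and epoch count are illustrative assumptions, not the paper's exact setting.

```python
import numpy as np

# Minimal sketch (assumed setup, not the paper's exact one): RF regression
# with the second layer trained by constant step-size SGD, compared with
# the exact minimum-norm least-squares interpolator.

def random_features(X, W):
    # ReLU random-feature map with the usual 1/sqrt(m) scaling.
    return np.maximum(X @ W, 0.0) / np.sqrt(W.shape[1])

def train_rf_sgd(Phi, y, step_size=0.1, epochs=50, seed=0):
    # Multi-epoch, one-sample SGD on the squared loss, zero initialization.
    rng = np.random.default_rng(seed)
    n, m = Phi.shape
    theta = np.zeros(m)
    for _ in range(epochs):
        for i in rng.permutation(n):
            grad = (Phi[i] @ theta - y[i]) * Phi[i]
            theta -= step_size * grad
    return theta

rng = np.random.default_rng(1)
n, d = 100, 10
beta = rng.standard_normal(d) / np.sqrt(d)            # ground-truth linear signal
X, Xte = rng.standard_normal((n, d)), rng.standard_normal((2000, d))
y = X @ beta + 0.1 * rng.standard_normal(n)           # noisy training labels
yte = Xte @ beta                                      # noiseless test targets

# Sweeping m across the interpolation threshold (m ~ n) typically traces
# out a double descent curve in the test error of both estimators.
for m in [10, 50, 100, 200, 800]:
    W = rng.standard_normal((d, m))                   # random weights, drawn once and frozen
    Phi_tr, Phi_te = random_features(X, W), random_features(Xte, W)
    theta_sgd = train_rf_sgd(Phi_tr, y)
    theta_mn = np.linalg.pinv(Phi_tr) @ y             # exact minimum-norm solution
    mse = lambda t: np.mean((Phi_te @ t - yte) ** 2)
    print(f"m = {m:4d}  SGD test MSE = {mse(theta_sgd):.4f}  "
          f"min-norm test MSE = {mse(theta_mn):.4f}")
```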

Bibliographic Details
Main Authors: Liu, Fanghui; Suykens, Johan A. K.; Cevher, Volkan
Format: Article
Language: English
Subjects: Computer Science - Learning; Statistics - Machine Learning
Online Access: Order full text
DOI: 10.48550/arxiv.2110.06910
Published: 2021-10-13
Source: arXiv.org