Kernel and Rich Regimes in Overparametrized Models

A recent line of work studies overparametrized neural networks in the "kernel regime," i.e. when the network behaves during training as a kernelized linear predictor, and thus training with gradient descent has the effect of finding the minimum RKHS norm solution. This stands in contrast to other studies which demonstrate how gradient descent on overparametrized multilayer networks can induce rich implicit biases that are not RKHS norms. Building on an observation by Chizat and Bach, we show how the scale of the initialization controls the transition between the "kernel" (aka lazy) and "rich" (aka active) regimes and affects generalization properties in multilayer homogeneous models. We provide a complete and detailed analysis for a simple two-layer model that already exhibits an interesting and meaningful transition between the kernel and rich regimes, and we demonstrate the transition for more complex matrix factorization models and multilayer non-linear networks.
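For orientation, here is a minimal, self-contained sketch of the transition the abstract describes: gradient descent on a two-layer "squared" diagonal parametrization f(x) = <w_plus^2 - w_minus^2, x> of an underdetermined linear problem, where the initialization scale alpha decides whether training stays close to the minimum-l2 ("kernel") interpolator or finds a sparser, minimum-l1-like ("rich") one. The exact setup (data, loss, step sizes, iteration counts) is an illustrative assumption of this sketch, not code from the paper.

    # Sketch (illustrative assumptions, not the paper's code): the
    # kernel-to-rich transition controlled by initialization scale alpha
    # in the squared parametrization f(x) = <w_plus**2 - w_minus**2, x>.
    import numpy as np

    rng = np.random.default_rng(0)
    n, d = 5, 20                              # fewer examples than dimensions
    X = rng.standard_normal((n, d))
    w_star = np.zeros(d)
    w_star[:2] = [1.0, -1.0]                  # sparse ground truth
    y = X @ w_star

    def train(alpha, steps=200_000):
        """Gradient descent from the init w_plus = w_minus = alpha * ones."""
        lr = 1e-3 / (1.0 + alpha**2)          # smaller steps keep large inits stable
        wp = alpha * np.ones(d)
        wm = alpha * np.ones(d)
        for _ in range(steps):
            beta = wp**2 - wm**2              # effective linear predictor
            g = X.T @ (X @ beta - y) / n      # gradient of squared loss w.r.t. beta
            wp, wm = wp - lr * 2 * wp * g, wm + lr * 2 * wm * g  # chain rule
        return wp**2 - wm**2

    # Minimum l2-norm interpolator, i.e. the "kernel" solution to compare against.
    beta_l2 = np.linalg.lstsq(X, y, rcond=None)[0]
    print(f"min-l2 solution: l1={np.linalg.norm(beta_l2, 1):.2f}, "
          f"l2={np.linalg.norm(beta_l2, 2):.2f}")
    for alpha in (10.0, 0.01):
        beta = train(alpha)
        print(f"alpha={alpha:>5}: l1={np.linalg.norm(beta, 1):.2f}, "
              f"l2={np.linalg.norm(beta, 2):.2f}")

Under these assumptions, the alpha = 10 run should track the norms of the min-l2 ("kernel") interpolator, while the alpha = 0.01 run should recover something close to the sparse w_star with a smaller l1 norm, mirroring the transition between regimes that the paper attributes to initialization scale.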

Bibliographic Details

Published in: arXiv.org, 2020-02-25
Main authors: Woodworth, Blake; Gunasekar, Suriya; Savarese, Pedro; Moroshko, Edward; Golan, Itay; Lee, Jason; Soudry, Daniel; Srebro, Nathan
Format: Article
Language: English
EISSN: 2331-8422
Publisher: Cornell University Library, arXiv.org (Ithaca)
Subjects: Kernels; Multilayers; Neural networks; Norms; Training
Source: Freely Accessible Journals
Online access: Full text