Get rich quick: exact solutions reveal how unbalanced initializations promote rapid feature learning
While the impressive performance of modern neural networks is often attributed to their capacity to efficiently extract task-relevant features from data, the mechanisms underlying this rich feature learning regime remain elusive, with much of our theoretical understanding stemming from the opposing lazy regime. In this work, we derive exact solutions to a minimal model that transitions between lazy and rich learning, precisely elucidating how unbalanced layer-specific initialization variances and learning rates determine the degree of feature learning. Our analysis reveals that they conspire to influence the learning regime through a set of conserved quantities that constrain and modify the geometry of learning trajectories in parameter and function space. We extend our analysis to more complex linear models with multiple neurons, outputs, and layers and to shallow nonlinear networks with piecewise linear activation functions. In linear networks, rapid feature learning only occurs from balanced initializations, where all layers learn at similar speeds, while in nonlinear networks, unbalanced initializations that promote faster learning in earlier layers can accelerate rich learning. Through a series of experiments, we provide evidence that this unbalanced rich regime drives feature learning in deep finite-width networks, promotes interpretability of early layers in CNNs, reduces the sample complexity of learning hierarchical data, and decreases the time to grokking in modular arithmetic. Our theory motivates further exploration of unbalanced initializations to enhance efficient feature learning.
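The abstract's central mechanism, conserved quantities fixed at initialization, can be made concrete in the paper's minimal two-layer setting. The sketch below is our own illustration, not the authors' code: for a scalar model f(x) = a·b·x trained on a single datapoint (x = 1, target y), gradient flow exactly conserves δ = a² − b², so the balance chosen at initialization persists throughout training. The task, learning rate, and step count here are assumptions made for illustration.

```python
# Minimal illustrative sketch (not the authors' code): a two-layer scalar
# linear network f(x) = a * b * x, trained with gradient descent on one
# datapoint (x = 1, target y). Under gradient flow, delta = a^2 - b^2 is
# exactly conserved (and approximately so for small-step gradient descent),
# so the (im)balance set at initialization persists and shapes the dynamics.

def train(a0, b0, y=2.0, lr=0.01, steps=2000):
    """Gradient descent on L = 0.5 * (y - a*b)^2."""
    a, b = a0, b0
    for _ in range(steps):
        err = y - a * b                            # residual
        a, b = a + lr * err * b, b + lr * err * a  # simultaneous -grad steps
    return a, b

# Balanced initialization (delta = 0): both weights learn at the same speed.
a, b = train(a0=0.1, b0=0.1)
print(f"balanced:   a={a:.3f}  b={b:.3f}  ab={a * b:.3f}")

# Unbalanced initialization (delta >> 0): a barely moves; b does the learning.
a, b = train(a0=5.0, b0=0.01)
print(f"unbalanced: a={a:.3f}  b={b:.3f}  ab={a * b:.3f}")
```

From the balanced start, both weights move equally and converge to √2 (rich, feature-learning dynamics); from the unbalanced start, the large weight stays nearly frozen while the small one absorbs the target, mimicking lazy, kernel-like dynamics. This is consistent with the abstract's claim that in linear networks rapid feature learning only occurs from balanced initializations.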
Saved in:
Published in: | arXiv.org 2024-10 |
---|---|
Main authors: | Kunin, Daniel; Raventós, Allan; Dominé, Clémentine; Chen, Feng; Klindt, David; Saxe, Andrew; Ganguli, Surya |
Format: | Article |
Language: | eng |
Identifier: | EISSN: 2331-8422 |
Subjects: | Complexity; Exact solutions; Function space; Machine learning; Mathematical analysis; Networks; Parameter modification |
Online access: | Full text |