Deformed semicircle law and concentration of nonlinear random matrices for ultra-wide neural networks
In this paper, we investigate a two-layer fully connected neural network of the form f(X) = (1/√d1) a⊤σ(WX), where X ∈ R^{d0×n} is a deterministic data matrix, W ∈ R^{d1×d0} and a ∈ R^{d1} are random Gaussian weights, and σ is a nonlinear activation function. We study the limiting spectral distributions...
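The model in the abstract is concrete enough to simulate. Below is a minimal NumPy sketch, not code from the paper: the choice σ = tanh, the column normalization of X, the Monte Carlo stand-in for the expected kernel, and the √(d1/n) fluctuation scaling are all illustrative assumptions. It builds the network output f(X), the empirical conjugate kernel (1/d1)σ(WX)⊤σ(WX), and a centered, rescaled fluctuation matrix of the kind whose eigenvalue distribution a deformed semicircle law would describe.

```python
import numpy as np

rng = np.random.default_rng(0)

# Dimensions; the ultra-wide regime has d1 >> n (only suggestively so here).
d0, d1, n = 100, 8000, 300

# Data matrix X in R^{d0 x n}; columns rescaled to norm sqrt(d0),
# a common normalization assumption in this literature.
X = rng.standard_normal((d0, n))
X *= np.sqrt(d0) / np.linalg.norm(X, axis=0)

sigma = np.tanh  # a Lipschitz activation, as the abstract assumes

def empirical_ck(W):
    """Empirical conjugate kernel (1/d1) * sigma(WX)^T sigma(WX)."""
    Y = sigma(W @ X)       # d1 x n post-activation features
    return Y.T @ Y / d1    # n x n kernel matrix

# One realization of the network: W in R^{d1 x d0}, a in R^{d1}.
W = rng.standard_normal((d1, d0))
a = rng.standard_normal(d1)
f_X = a @ sigma(W @ X) / np.sqrt(d1)   # the network output f(X)

CK = empirical_ck(W)

# Monte Carlo stand-in for the expected kernel E[CK] over the weights.
Phi_hat = np.mean(
    [empirical_ck(rng.standard_normal((d1, d0))) for _ in range(10)],
    axis=0,
)

# Centered, rescaled fluctuation; its eigenvalue histogram is what a
# deformed semicircle law would describe as d1/n and n grow.
eigs = np.linalg.eigvalsh(np.sqrt(d1 / n) * (CK - Phi_hat))
print(f"fluctuation spectrum in [{eigs.min():.3f}, {eigs.max():.3f}]")
```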
Saved in:
Published in: | The Annals of applied probability, 2024-04, Vol. 34 (2), p. 1896 |
Main authors: | Wang, Zhichao; Zhu, Yizhe |
Format: | Article |
Language: | eng |
Subjects: | Asymptotic properties; Constraining; Covariance matrix; Deformation; Eigenvalues; Lower bounds; Matrix; Neural networks; Regression |
Online access: | Full text |
container_issue | 2 |
container_start_page | 1896 |
container_title | The Annals of applied probability |
container_volume | 34 |
creator | Wang, Zhichao; Zhu, Yizhe |
description | In this paper, we investigate a two-layer fully connected neural network of the form f(X) = (1/√d1) a⊤σ(WX), where X ∈ R^{d0×n} is a deterministic data matrix, W ∈ R^{d1×d0} and a ∈ R^{d1} are random Gaussian weights, and σ is a nonlinear activation function. We study the limiting spectral distributions of two empirical kernel matrices associated with f(X): the empirical conjugate kernel (CK) and neural tangent kernel (NTK), beyond the linear-width regime (d1 ≍ n). We focus on the ultra-wide regime, where the width d1 of the first layer is much larger than the sample size n. Under appropriate assumptions on X and σ, a deformed semicircle law emerges as d1/n → ∞ and n → ∞. We first prove this limiting law for generalized sample covariance matrices with some dependency. To specify it for our neural network model, we provide a nonlinear Hanson–Wright inequality suitable for neural networks with random weights and Lipschitz activation functions. We also demonstrate nonasymptotic concentrations of the empirical CK and NTK around their limiting kernels in the spectral norm, along with lower bounds on their smallest eigenvalues. As an application, we show that random feature regression induced by the empirical kernel achieves the same asymptotic performance as its limiting kernel regression under the ultra-wide regime. This allows us to calculate the asymptotic training and test errors for random feature regression using the corresponding kernel regression. |
doi_str_mv | 10.1214/23-AAP2010 |
format | Article |
fulltext | fulltext |
identifier | ISSN: 1050-5164 |
ispartof | The Annals of applied probability, 2024-04, Vol.34 (2), p.1896 |
issn | 1050-5164; 2168-8737 |
language | eng |
recordid | cdi_proquest_journals_3055130497 |
source | Project Euclid Complete |
subjects | Asymptotic properties; Constraining; Covariance matrix; Deformation; Eigenvalues; Lower bounds; Matrix; Neural networks; Regression |
title | Deformed semicircle law and concentration of nonlinear random matrices for ultra-wide neural networks |
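To complement the description field above: for this parameterization, differentiating f(X) = (1/√d1) a⊤σ(WX) with respect to both a and W expresses the empirical NTK as the CK plus a Hadamard-product correction. The sketch below is an illustration of that identity under the same assumptions as the earlier snippet (NumPy, σ = tanh, normalized columns); it is not the paper's code.

```python
import numpy as np

rng = np.random.default_rng(1)
d0, d1, n = 100, 8000, 300

X = rng.standard_normal((d0, n))
X *= np.sqrt(d0) / np.linalg.norm(X, axis=0)
W = rng.standard_normal((d1, d0))
a = rng.standard_normal(d1)

sigma = np.tanh

def dsigma(z):
    """Derivative of tanh."""
    return 1.0 - np.tanh(z) ** 2

Z = W @ X                            # d1 x n pre-activations

# Conjugate kernel: Gram matrix of the gradient with respect to a.
CK = sigma(Z).T @ sigma(Z) / d1

# The gradient with respect to W contributes (X^T X) entrywise-multiplied
# by (1/d1) * sigma'(Z)^T diag(a^2) sigma'(Z).
D = dsigma(Z)
S = D.T @ (a[:, None] ** 2 * D) / d1
NTK = CK + (X.T @ X) * S             # '*' is the Hadamard product

# Both kernels are n x n and positive semidefinite; the paper's results
# concern their spectra and smallest eigenvalues in the ultra-wide regime.
print(np.linalg.eigvalsh(NTK)[:3])   # a few smallest eigenvalues
```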