Neural Networks as Kernel Learners: The Silent Alignment Effect
ICLR 2022. Neural networks in the lazy training regime converge to kernel machines. Can neural networks in the rich feature learning regime learn a kernel machine with a data-dependent kernel? We demonstrate that this can indeed happen due to a phenomenon we term silent alignment, which requires that...
Main authors: | Atanasov, Alexander; Bordelon, Blake; Pehlevan, Cengiz |
---|---|
Format: | Article |
Language: | eng |
Subjects: | Computer Science - Learning; Statistics - Machine Learning |
Online access: | Order full text |
creator | Atanasov, Alexander; Bordelon, Blake; Pehlevan, Cengiz |
description | ICLR 2022. Neural networks in the lazy training regime converge to kernel machines. Can neural networks in the rich feature learning regime learn a kernel machine with a data-dependent kernel? We demonstrate that this can indeed happen due to a phenomenon we term silent alignment, which requires that the tangent kernel of a network evolves in eigenstructure while small and before the loss appreciably decreases, and grows only in overall scale afterwards. We show that such an effect takes place in homogeneous neural networks with small initialization and whitened data. We provide an analytical treatment of this effect in the linear network case. In general, we find that the kernel develops a low-rank contribution in the early phase of training, and then evolves in overall scale, yielding a function equivalent to a kernel regression solution with the final network's tangent kernel. The early spectral learning of the kernel depends on the depth. We also demonstrate that non-whitened data can weaken the silent alignment effect. |
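The abstract's claims lend themselves to a small numerical illustration. Below is a minimal sketch (not the authors' code) of the simplest setting the abstract mentions: a two-layer linear network f(x) = w2 · (W1 x) trained by plain gradient descent from a small initialization on roughly whitened data. All sizes, step counts, and helper names are illustrative assumptions. The script tracks the empirical tangent kernel's overall scale (its trace) and its alignment with the target kernel y yᵀ during training, and at the end compares the network's test predictions with kernel regression under the final tangent kernel.

```python
# Minimal sketch, assuming a two-layer linear network and squared loss;
# hyperparameters are illustrative, not taken from the paper.
import numpy as np

rng = np.random.default_rng(0)
n, n_test, d, h = 32, 8, 16, 64   # train/test set sizes, input dim, hidden width
sigma = 1e-3                      # small initialization scale
lr, steps = 0.05, 6000

X = rng.standard_normal((n, d)) / np.sqrt(d)         # roughly whitened inputs
X_test = rng.standard_normal((n_test, d)) / np.sqrt(d)
w_star = rng.standard_normal(d)                      # linear teacher
y = X @ w_star

W1 = sigma * rng.standard_normal((h, d))
w2 = sigma * rng.standard_normal(h)

def ntk(A, B):
    # Tangent kernel of f(x) = w2 . (W1 x):
    # K(x, x') = (w2 . w2)(x . x') + x^T W1^T W1 x'.
    return (w2 @ w2) * (A @ B.T) + A @ W1.T @ W1 @ B.T

def alignment(K, y):
    # Kernel-target alignment  <K, y y^T>_F / (||K||_F ||y y^T||_F).
    yy = np.outer(y, y)
    return np.sum(K * yy) / (np.linalg.norm(K) * np.linalg.norm(yy))

for t in range(steps):
    f = X @ W1.T @ w2                      # network outputs on the training set
    err = f - y                            # residual of the squared loss
    grad_w2 = W1 @ X.T @ err / n
    grad_W1 = np.outer(w2, err @ X) / n
    w2 -= lr * grad_w2
    W1 -= lr * grad_W1
    if t % 1000 == 0:
        K = ntk(X, X)
        print(f"step {t:5d}  loss {np.mean(err**2):.2e}  "
              f"trace(K) {np.trace(K):.2e}  alignment {alignment(K, y):.3f}")

# Compare the trained network with kernel regression under the *final* tangent
# kernel (the initial function is ~0 because of the small initialization, so no
# correction for it is needed).
K_train = ntk(X, X)
k_test = ntk(X_test, X)
f_kernel = k_test @ np.linalg.pinv(K_train) @ y
f_net = X_test @ W1.T @ w2
print("max |network - kernel regression| on test points:",
      np.max(np.abs(f_net - f_kernel)))
```

Under this setup one should expect the alignment to rise early, while the loss is still near its initial value and the kernel's trace is tiny, after which the trace grows with the alignment roughly fixed; the final comparison should then be close to zero, consistent with the kernel-regression equivalence stated in the abstract.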
doi_str_mv | 10.48550/arxiv.2111.00034 |
format | Article |
fulltext | fulltext_linktorsrc |
identifier | DOI: 10.48550/arxiv.2111.00034 |
language | eng |
recordid | cdi_arxiv_primary_2111_00034 |
source | arXiv.org |
subjects | Computer Science - Learning; Statistics - Machine Learning |
title | Neural Networks as Kernel Learners: The Silent Alignment Effect |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-24T13%3A10%3A45IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-arxiv_GOX&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Neural%20Networks%20as%20Kernel%20Learners:%20The%20Silent%20Alignment%20Effect&rft.au=Atanasov,%20Alexander&rft.date=2021-10-29&rft_id=info:doi/10.48550/arxiv.2111.00034&rft_dat=%3Carxiv_GOX%3E2111_00034%3C/arxiv_GOX%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true |