Why Does Deep and Cheap Learning Work So Well?
Saved in:
| Published in: | Journal of statistical physics, 2017-09, Vol. 168 (6), p. 1223-1247 |
|---|---|
| Main authors: | Lin, Henry W.; Tegmark, Max; Rolnick, David |
| Format: | Article |
| Language: | English |
| Subjects: | |
| Online access: | Full text |
container_end_page | 1247 |
---|---|
container_issue | 6 |
container_start_page | 1223 |
container_title | Journal of statistical physics |
container_volume | 168 |
creator | Lin, Henry W.; Tegmark, Max; Rolnick, David |
description | We show how the success of deep learning could depend not only on mathematics but also on physics: although well-known mathematical theorems guarantee that neural networks can approximate arbitrary functions well, the class of functions of practical interest can frequently be approximated through “cheap learning” with exponentially fewer parameters than generic ones. We explore how properties frequently encountered in physics such as symmetry, locality, compositionality, and polynomial log-probability translate into exceptionally simple neural networks. We further argue that when the statistical process generating the data is of a certain hierarchical form prevalent in physics and machine learning, a deep neural network can be more efficient than a shallow one. We formalize these claims using information theory and discuss the relation to the renormalization group. We prove various “no-flattening theorems” showing when efficient linear deep networks cannot be accurately approximated by shallow ones without efficiency loss; for example, we show that n variables cannot be multiplied using fewer than 2^n neurons in a single hidden layer. |
doi_str_mv | 10.1007/s10955-017-1836-5 |
format | Article |
fulltext | fulltext |
identifier | ISSN: 0022-4715 |
ispartof | Journal of statistical physics, 2017-09, Vol.168 (6), p.1223-1247 |
issn | 0022-4715 (print); 1572-9613 (electronic) |
language | eng |
recordid | cdi_proquest_journals_1933664042 |
source | SpringerLink Journals |
subjects | Approximation; Artificial neural networks; Functions (mathematics); Information theory; Machine learning; Mathematical analysis; Mathematical and Computational Physics; Neural networks; Neurons; Physical Chemistry; Physics; Physics and Astronomy; Quantum Physics; Statistical Physics and Dynamical Systems; Theorems; Theoretical |
title | Why Does Deep and Cheap Learning Work So Well? |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-02-09T12%3A58%3A06IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-gale_proqu&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Why%20Does%20Deep%20and%20Cheap%20Learning%20Work%20So%20Well?&rft.jtitle=Journal%20of%20statistical%20physics&rft.au=Lin,%20Henry%20W.&rft.date=2017-09-01&rft.volume=168&rft.issue=6&rft.spage=1223&rft.epage=1247&rft.pages=1223-1247&rft.issn=0022-4715&rft.eissn=1572-9613&rft_id=info:doi/10.1007/s10955-017-1836-5&rft_dat=%3Cgale_proqu%3EA502362785%3C/gale_proqu%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=1933664042&rft_id=info:pmid/&rft_galeid=A502362785&rfr_iscdi=true |
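Note on the abstract's multiplication example: the claim that n variables cannot be multiplied with fewer than 2^n neurons in a single hidden layer is contrasted in the paper with how cheaply a product of just two variables can be represented. The following is a minimal sketch of that kind of construction, assuming only a smooth activation σ with σ''(0) ≠ 0 and small inputs; it is an illustrative reconstruction, not necessarily the paper's exact formula.

```latex
% Sketch: four hidden neurons with a smooth activation \sigma approximate a product,
% assuming \sigma''(0) \neq 0 and |x|, |y| small enough for a Taylor expansion.
\[
  x y \;\approx\;
  \frac{\sigma(x+y) + \sigma(-x-y) - \sigma(x-y) - \sigma(-x+y)}{4\,\sigma''(0)} .
\]
% Justification: expand \sigma(u) = \sigma(0) + \sigma'(0)\,u + \tfrac{1}{2}\sigma''(0)\,u^2 + O(u^3).
% Constant and linear terms cancel pairwise, odd-order terms cancel by the u -> -u symmetry,
% and the quadratic terms leave \sigma''(0)\,[(x+y)^2 - (x-y)^2] = 4\,\sigma''(0)\,x y.
```

In this sketch, multiplying two numbers costs a constant number of neurons, whereas the abstract's no-flattening result states that a single product of n distinct inputs forces exponentially many (2^n) neurons in a shallow network.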