The role of capacity constraints in Convolutional Neural Networks for learning random versus natural data

Convolutional neural networks (CNNs) are often described as promising models of human vision, yet they show many differences from human abilities. We focus on a superhuman capacity of top-performing CNNs, namely, their ability to learn very large datasets of random patterns. We verify that human learning on such tasks is extremely limited, even with few stimuli. We argue that the performance difference is due to CNNs' overcapacity and introduce biologically inspired mechanisms to constrain it, while retaining the good test-set generalisation to structured images that is characteristic of CNNs. We investigate the efficacy of adding noise to hidden units' activations, restricting early convolutional layers with a bottleneck, and using a bounded activation function. Internal noise was the most potent intervention and the only one which, by itself, could reduce random data performance in the tested models to chance levels. We also investigated whether networks with biologically inspired capacity constraints show improved generalisation to out-of-distribution stimuli; however, little benefit was observed. Our results suggest that constraining networks with biologically motivated mechanisms paves the way for closer correspondence between network and human performance, but the few manipulations we have tested are only a small step towards that goal.
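The abstract names three capacity-limiting mechanisms: internal noise on hidden-unit activations, a channel bottleneck in the early convolutional layers, and a bounded activation function. As a rough illustration only, here is a minimal PyTorch sketch of how such constraints could be wired into a small CNN; the class names, layer sizes, noise level, and activation bound are hypothetical choices made for this sketch, not the architecture or hyperparameters used in the paper.

```python
# Hypothetical sketch of the three capacity constraints described in the
# abstract. All sizes and constants below are illustrative guesses, not
# the paper's actual settings.
import torch
import torch.nn as nn

class NoisyBounded(nn.Module):
    """Adds Gaussian noise to activations (train time) and bounds them."""
    def __init__(self, noise_std=0.5, bound=1.0):
        super().__init__()
        self.noise_std = noise_std
        self.bound = bound

    def forward(self, x):
        if self.training and self.noise_std > 0:
            x = x + torch.randn_like(x) * self.noise_std  # internal noise
        return self.bound * torch.tanh(x)                 # bounded activation

class ConstrainedCNN(nn.Module):
    def __init__(self, n_classes=10, bottleneck_channels=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1),
            NoisyBounded(),
            # Bottleneck: squeeze early representations through few channels
            nn.Conv2d(32, bottleneck_channels, kernel_size=1),
            NoisyBounded(),
            nn.Conv2d(bottleneck_channels, 64, kernel_size=3, padding=1),
            NoisyBounded(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))
```

Following the usual dropout-style convention, this sketch injects noise only in training mode; whether noise should also act at evaluation time is a separate design choice that the sketch leaves open.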

Bibliographic details
Published in: Neural networks, 2023-04, Vol. 161, p. 515-524
Main authors: Tsvetkov, Christian; Malhotra, Gaurav; Evans, Benjamin D.; Bowers, Jeffrey S.
Format: Article
Language: English
Subjects: Biological constraints; Bottleneck; Capacity; Deep Neural Networks; Generalization, Psychological; Humans; Internal noise; Learning; Neural Networks, Computer
Source: MEDLINE; Elsevier ScienceDirect Journals
Online access: Full text
Publisher: Elsevier Ltd
DOI: 10.1016/j.neunet.2023.01.011
ISSN: 0893-6080
EISSN: 1879-2782
PMID: 36805266