Discovering and Explaining the Representation Bottleneck of DNNs

This paper explores the bottleneck of feature representations of deep neural networks (DNNs), from the perspective of the complexity of interactions between input variables encoded in DNNs. To this end, we focus on the multi-order interaction between input variables, where the order represents the complexity of interactions. We discover that a DNN is more likely to encode both too simple interactions and too complex interactions, but usually fails to learn interactions of intermediate complexity. Such a phenomenon is widely shared by different DNNs for different tasks. This phenomenon indicates a cognition gap between DNNs and human beings, and we call it a representation bottleneck. We theoretically prove the underlying reason for the representation bottleneck. Furthermore, we propose a loss to encourage/penalize the learning of interactions of specific complexities, and analyze the representation capacities of interactions of different complexities.
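The analysis hinges on the multi-order interaction metric, where the order m is the number of contextual variables considered alongside a pair of variables (i, j). The Python sketch below is a hedged illustration, not the paper's implementation: it assumes the standard definition I^(m)(i, j) = E over contexts S of size m of [f(S∪{i,j}) − f(S∪{i}) − f(S∪{j}) + f(S)], and the masking convention behind f(S) and all function names are illustrative assumptions.

import random

# Minimal illustrative sketch (not the authors' released code): Monte Carlo
# estimate of the m-th order interaction between input variables i and j.
# f(S) denotes the network output when only the variables in S are kept and
# all other variables are masked with a baseline value (an assumed convention).

def multi_order_interaction(f, n, i, j, m, num_samples=100, seed=0):
    """Estimate the m-th order interaction between variables i and j.

    f           -- callable mapping a frozenset of kept variable indices to a scalar output
    n           -- total number of input variables
    i, j        -- the pair of variables whose interaction is measured
    m           -- the order, i.e. the size of the random context S (0 <= m <= n - 2)
    num_samples -- number of sampled contexts used for the average
    """
    rng = random.Random(seed)
    rest = [k for k in range(n) if k not in (i, j)]
    total = 0.0
    for _ in range(num_samples):
        S = frozenset(rng.sample(rest, m))        # random context of exactly m other variables
        total += (f(S | {i, j}) - f(S | {i})      # joint effect of adding i and j to S ...
                  - f(S | {j}) + f(S))            # ... minus their individual effects
    return total / num_samples

# Low orders m capture simple, local interactions; high orders m capture complex,
# near-global ones. The paper's finding is that interactions of intermediate
# order are the ones DNNs typically fail to learn.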

Bibliographic Details
Published in: arXiv.org, 2022-11
Main authors: Deng, Huiqi; Ren, Qihan; Zhang, Hao; Zhang, Quanshi
Format: Article
Language: English
Subjects: Artificial neural networks; Cognition; Complexity; Machine learning; Representations
Identifier: EISSN 2331-8422
Online access: Full text