Discovering and Explaining the Representation Bottleneck of DNNs
Main Authors:
Format: Article
Language: English
Subjects:
Online Access: Order full text
Abstract: This paper explores the bottleneck of feature representations of deep neural networks (DNNs) from the perspective of the complexity of interactions between input variables encoded in DNNs. To this end, we focus on the multi-order interaction between input variables, where the order represents the complexity of an interaction. We discover that a DNN is more likely to encode both overly simple and overly complex interactions, but usually fails to learn interactions of intermediate complexity. This phenomenon is widely shared across different DNNs and different tasks. It indicates a cognition gap between DNNs and human beings, which we call the representation bottleneck. We theoretically prove the underlying reason for the representation bottleneck. Furthermore, we propose a loss to encourage or penalize the learning of interactions of specific complexities, and analyze the representation capacities of interactions of different complexities.
DOI: 10.48550/arxiv.2111.06236
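To make the multi-order interaction mentioned in the abstract concrete, the sketch below shows one way to estimate the m-th order interaction I^(m)(i, j) between two input variables by Monte Carlo sampling. It is not the authors' released code: the helper names (`masked_output`, `interaction_order_m`), the baseline-masking scheme, and the toy model are illustrative assumptions. The sketch assumes the common multi-order interaction definition, I^(m)(i, j) = E over contexts S of size m drawn from the remaining variables of [f(S ∪ {i, j}) − f(S ∪ {i}) − f(S ∪ {j}) + f(S)], where f(S) is the model output with all variables outside S masked.

```python
"""Illustrative sketch: Monte Carlo estimate of the m-th order interaction
I^(m)(i, j), assuming a baseline-masking definition of f(S)."""
import random
from typing import Callable, Sequence


def masked_output(model_fn: Callable, x: Sequence[float], keep: set,
                  baseline: float = 0.0) -> float:
    """Evaluate the model on x with every variable outside `keep` set to a baseline."""
    masked = [xi if idx in keep else baseline for idx, xi in enumerate(x)]
    return model_fn(masked)


def interaction_order_m(model_fn: Callable, x: Sequence[float], i: int, j: int,
                        m: int, n_samples: int = 100, baseline: float = 0.0) -> float:
    """Estimate I^(m)(i, j) by sampling contexts S of size m from N \\ {i, j}."""
    others = [k for k in range(len(x)) if k not in (i, j)]
    total = 0.0
    for _ in range(n_samples):
        S = set(random.sample(others, m))
        f_s = masked_output(model_fn, x, S, baseline)
        f_si = masked_output(model_fn, x, S | {i}, baseline)
        f_sj = masked_output(model_fn, x, S | {j}, baseline)
        f_sij = masked_output(model_fn, x, S | {i, j}, baseline)
        total += f_sij - f_si - f_sj + f_s
    return total / n_samples


if __name__ == "__main__":
    # Toy model with a pairwise product term, so variables 0 and 1 genuinely interact.
    toy_model = lambda v: v[0] * v[1] + sum(v)
    x = [1.0] * 10
    # With a zero baseline, the additive terms cancel and the estimate is about 1.0.
    print(interaction_order_m(toy_model, x, i=0, j=1, m=4, n_samples=200))
```

Averaging the magnitude of such estimates over many variable pairs and inputs, for each order m, is one way to plot the interaction strength against order and observe the bottleneck shape the abstract describes; the paper's loss then adds a term to the training objective that encourages or penalizes interactions of chosen orders.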