End-to-End Training Induces Information Bottleneck through Layer-Role Differentiation: A Comparative Analysis with Layer-wise Training
Saved in:
Main authors:
Format: Article
Language: English
Keywords:
Online access: Order full text
Abstract: End-to-end (E2E) training, which optimizes the entire model through error backpropagation, underpins the advances of deep learning. Despite its high performance, E2E training faces problems of memory consumption, limited parallelism, and discrepancy with the functioning of the actual brain. Various alternative methods have been proposed to overcome these difficulties; however, none yet matches the performance of E2E training, and they therefore fall short in practicality. Moreover, beyond the performance gap, little is understood about how the properties of the trained models differ. In this paper, we reconsider why E2E training demonstrates superior performance by comparing it with layer-wise training, a non-E2E method that sets error signals locally. Based on the observation that E2E training has an advantage in propagating input information, we analyze the information-plane dynamics of intermediate representations using the Hilbert-Schmidt independence criterion (HSIC). Our normalized-HSIC analysis reveals that E2E training not only propagates information efficiently but also exhibits distinct information dynamics across layers. Furthermore, we show that this layer-role differentiation leads to the final representation following the information bottleneck principle. This suggests that analyses of the information bottleneck in deep learning should consider the cooperative interactions between layers, not just the final layer.
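The normalized HSIC referenced in the abstract is a kernel-based dependence measure. As a minimal illustrative sketch (not the paper's exact implementation), the snippet below computes a normalized HSIC between two sets of representations; the Gaussian kernel and the median-bandwidth heuristic are assumed defaults for illustration.

```python
import numpy as np

def gaussian_kernel(X, sigma=None):
    """Pairwise Gaussian (RBF) kernel matrix for the rows of X."""
    sq = np.sum(X ** 2, axis=1)
    d2 = np.maximum(sq[:, None] + sq[None, :] - 2.0 * X @ X.T, 0.0)
    if sigma is None:
        # Median heuristic for the bandwidth (an assumed, common default).
        sigma = np.sqrt(0.5 * np.median(d2[d2 > 0]))
    return np.exp(-d2 / (2.0 * sigma ** 2))

def hsic(K, L):
    """Biased empirical HSIC estimate from two kernel matrices."""
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n  # centering matrix
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

def normalized_hsic(X, Y):
    """HSIC normalized by the self-dependence terms, giving a value in [0, 1]."""
    K, L = gaussian_kernel(X), gaussian_kernel(Y)
    return hsic(K, L) / np.sqrt(hsic(K, K) * hsic(L, L))

# Toy usage: dependence of a hidden representation Z on the input X.
rng = np.random.default_rng(0)
X = rng.normal(size=(256, 32))               # inputs
Z = np.tanh(X @ rng.normal(size=(32, 16)))   # a hidden layer's activations
print(normalized_hsic(X, Z))                 # high value = much input info retained
```

In an information-plane analysis of the kind the abstract describes, one would track the normalized HSIC of each layer's representation with the inputs and with the labels over the course of training; the functions above are only meant to make the quantity concrete.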
DOI: 10.48550/arxiv.2402.09050