Statistical Physics of Deep Neural Networks: Initialization toward Optimal Channels
Format: Article
Language: English
Abstract: In deep learning, neural networks serve as noisy channels between input data and its representation. This perspective naturally relates deep learning to the pursuit of constructing channels with optimal performance in information transmission and representation. While considerable efforts are concentrated on realizing optimal channel properties during network optimization, we study a frequently overlooked possibility: that neural networks can be initialized toward optimal channels. Our theory, consistent with experimental validation, identifies the primary mechanisms underlying this possibility and suggests intrinsic connections between statistical physics and deep learning. Unlike conventional theories that characterize neural networks using the classic mean-field approximation, we offer an analytic proof that this extensively applied simplification scheme is not valid for studying neural networks as information channels. To fill this gap, we develop a corrected mean-field framework for characterizing the limiting behaviors of information propagation in neural networks without strong assumptions on the inputs. Based on it, we propose an analytic theory proving that mutual information between inputs and propagated signals is maximized when neural networks are initialized at dynamic isometry, a regime in which information transmits via norm-preserving mappings. These theoretical predictions are validated by experiments on real neural networks, suggesting that our theory is robust against finite-size effects. Finally, we analyze our findings with information bottleneck theory to confirm the precise relations among dynamic isometry, mutual information maximization, and optimal channel properties in deep learning.
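The following is a minimal sketch, not the authors' code, of the "dynamic isometry" notion referenced in the abstract: a network is at dynamic isometry when the singular values of its input-output Jacobian concentrate around 1, so signals propagate through norm-preserving mappings. The sketch compares orthogonal initialization (which achieves this exactly for a deep linear network) with i.i.d. Gaussian initialization; the width, depth, and linear architecture are illustrative assumptions rather than settings taken from the paper.

```python
# Sketch: compare the end-to-end Jacobian of a deep linear network under
# orthogonal vs. i.i.d. Gaussian initialization. Orthogonal weights give all
# singular values exactly 1 (norm-preserving propagation); Gaussian weights
# let them spread away from 1 as depth grows.
import numpy as np

rng = np.random.default_rng(0)
width, depth = 256, 50  # illustrative choices, not values from the paper

def orthogonal(n):
    # QR decomposition of a Gaussian matrix yields a random orthogonal matrix.
    q, _ = np.linalg.qr(rng.standard_normal((n, n)))
    return q

def gaussian(n):
    # Variance 1/n keeps the mean squared singular value near 1, but the
    # individual singular values of the deep product still disperse.
    return rng.standard_normal((n, n)) / np.sqrt(n)

for name, init in [("orthogonal", orthogonal), ("gaussian", gaussian)]:
    # For a deep linear network, the input-output Jacobian is the weight product.
    J = np.eye(width)
    for _ in range(depth):
        J = init(width) @ J
    s = np.linalg.svd(J, compute_uv=False)
    x = rng.standard_normal(width)
    ratio = np.linalg.norm(J @ x) / np.linalg.norm(x)
    print(f"{name:10s} singular values: min={s.min():.3f} max={s.max():.3f} "
          f"||Jx||/||x||={ratio:.3f}")
```

Under the orthogonal initialization the product of weights remains orthogonal, so every singular value is 1 and input norms are preserved; this is the regime in which the paper argues mutual information between inputs and propagated signals is maximized.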
DOI: 10.48550/arxiv.2212.01744