Information-Theoretic Generalization Bounds for Deep Neural Networks
Main authors:
Format: Article
Language: English
Keywords:
Online access: Order full text
Abstract: Deep neural networks (DNNs) exhibit an exceptional capacity for generalization in practical applications. This work aims to capture the effect and benefits of depth for supervised learning via information-theoretic generalization bounds. We first derive two hierarchical bounds on the generalization error in terms of the Kullback-Leibler (KL) divergence or the 1-Wasserstein distance between the train and test distributions of the network internal representations. The KL divergence bound shrinks as the layer index increases, while the Wasserstein bound implies the existence of a layer that serves as a generalization funnel, attaining a minimal 1-Wasserstein distance. Analytic expressions for both bounds are derived under the setting of binary Gaussian classification with linear DNNs. To quantify the contraction of the relevant information measures when moving deeper into the network, we analyze the strong data processing inequality (SDPI) coefficient between consecutive layers of three regularized DNN models: Dropout, DropConnect, and Gaussian noise injection. This enables refining our generalization bounds to capture the contraction as a function of the network architecture parameters. Specializing our results to DNNs with a finite parameter space and the Gibbs algorithm reveals that deeper yet narrower network architectures generalize better in those examples, although how broadly this conclusion holds remains an open question.
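
For orientation, the hierarchical bounds summarized in the abstract can be read against the classical input–output mutual information bound of Xu and Raginsky (2017), stated below under its standard sub-Gaussian assumption. This is background only: the paper's bounds instead involve divergences between the train and test distributions of the internal representations, layer by layer.

```latex
% Background only: the classical mutual-information generalization bound
% (Xu & Raginsky, 2017), not the hierarchical bound of this paper.
% Setting: training sample S = (Z_1, ..., Z_n) drawn i.i.d. from \mu,
% learned hypothesis W = A(S), and loss \ell(w, Z) that is \sigma-sub-Gaussian
% under Z ~ \mu for every fixed w.
\[
  \bigl|\mathbb{E}\bigl[L_\mu(W) - L_S(W)\bigr]\bigr|
  \;\le\; \sqrt{\frac{2\sigma^2}{n}\, I(W; S)},
  \qquad
  L_\mu(w) = \mathbb{E}_{Z\sim\mu}\bigl[\ell(w, Z)\bigr],\quad
  L_S(w) = \frac{1}{n}\sum_{i=1}^{n} \ell(w, Z_i).
\]
```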
DOI: 10.48550/arxiv.2404.03176
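
As an illustration of the "generalization funnel" idea mentioned in the abstract, the sketch below compares train and test distributions of each layer's representation with a 1-Wasserstein distance and reports the layer where it is smallest. The random ReLU network, the synthetic data, and the one-dimensional random projection used to make the distance computable with SciPy are all hypothetical choices for this example, not the paper's construction.

```python
# Illustrative sketch (not the paper's method): find a candidate "funnel" layer
# by comparing train vs. test distributions of internal representations with a
# projected 1-Wasserstein distance. Architecture, data, and projection are
# hypothetical choices made only for this example.
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)

def forward_layers(x, weights):
    """Return the representation at every layer of a ReLU network."""
    reps, h = [], x
    for W in weights:
        h = np.maximum(h @ W, 0.0)  # ReLU activation
        reps.append(h)
    return reps

# Hypothetical architecture: input dim 100, hidden widths 64 -> 32 -> 16.
dims = [100, 64, 32, 16]
weights = [rng.normal(scale=1.0 / np.sqrt(d_in), size=(d_in, d_out))
           for d_in, d_out in zip(dims[:-1], dims[1:])]

# Stand-ins for train/test inputs; a small mean shift mimics distribution drift.
x_train = rng.normal(size=(500, dims[0]))
x_test = rng.normal(loc=0.1, size=(500, dims[0]))

train_reps = forward_layers(x_train, weights)
test_reps = forward_layers(x_test, weights)

# Project each representation onto a fixed random direction so that SciPy's
# one-dimensional 1-Wasserstein distance applies.
dists = []
for ell, (tr, te) in enumerate(zip(train_reps, test_reps), start=1):
    u = rng.normal(size=tr.shape[1])
    u /= np.linalg.norm(u)
    dists.append(wasserstein_distance(tr @ u, te @ u))
    print(f"layer {ell}: projected W1 distance = {dists[-1]:.4f}")

print("candidate funnel layer:", int(np.argmin(dists)) + 1)
```

Projecting onto a single direction keeps SciPy's `wasserstein_distance` applicable; a faithful layer-wise comparison would require a multivariate Wasserstein estimate, for example via an optimal-transport solver.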