Topologically Densified Distributions
Saved in:
Main authors: , , ,
Format: Article
Language: English
Subjects:
Online access: Order full text
Abstract: We study regularization in the context of small sample-size learning with over-parameterized neural networks. Specifically, we shift focus from architectural properties, such as norms on the network weights, to properties of the internal representations before a linear classifier. In particular, we impose a topological constraint on samples drawn from the probability measure induced in that space. This provably leads to mass concentration effects around the representations of training instances, i.e., a property beneficial for generalization. By leveraging previous work to impose topological constraints in a neural network setting, we provide empirical evidence (across various vision benchmarks) to support our claim for better generalization.
DOI: 10.48550/arxiv.2002.04805
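
The abstract states the idea only at a high level. As a rough illustration, below is a minimal sketch (in PyTorch) of one way such a topological constraint on representations can be realized, assuming, as in the line of work the paper builds on, that it acts through 0-dimensional persistent homology: the death times of connected components equal the edge lengths of a minimum spanning tree over the mini-batch representations. The function names, the target scale `beta`, and the absolute-value penalty around it are illustrative assumptions, not the paper's exact implementation.

```python
# Illustrative sketch only: the names `zero_dim_death_times` and
# `topological_regularizer` and the target scale `beta` are
# assumptions, not the paper's published code.
import torch


def zero_dim_death_times(x: torch.Tensor) -> torch.Tensor:
    """Edge lengths of a minimum spanning tree over the rows of `x`
    (shape [n, d]), computed with Prim's algorithm. These coincide with
    the 0-dimensional persistence death times of the Vietoris-Rips
    filtration built on the point set."""
    n = x.shape[0]
    dist = torch.cdist(x, x)                 # pairwise Euclidean distances
    in_tree = torch.zeros(n, dtype=torch.bool, device=x.device)
    in_tree[0] = True
    best = dist[0].clone()                   # cheapest edge into the tree, per vertex
    deaths = []
    for _ in range(n - 1):
        cand = best.masked_fill(in_tree, float("inf"))
        j = int(torch.argmin(cand))          # next vertex to attach
        deaths.append(cand[j])               # weight of the MST edge used
        in_tree[j] = True
        best = torch.minimum(best, dist[j])
    return torch.stack(deaths)


def topological_regularizer(z: torch.Tensor, y: torch.Tensor,
                            beta: float) -> torch.Tensor:
    """Penalize deviation of per-class death times from a target scale
    `beta`, pulling same-class representations `z` (the activations
    before the linear classifier) into dense clusters."""
    loss = z.new_zeros(())
    for c in torch.unique(y):
        zc = z[y == c]
        if zc.shape[0] < 2:                  # need at least one edge
            continue
        loss = loss + (zero_dim_death_times(zc) - beta).abs().sum()
    return loss
```

In training, this term would typically be added to the usual cross-entropy loss on the mini-batch, weighted by a hyperparameter; gradients flow through the distances of the selected spanning-tree edges even though the edge selection itself is discrete.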