Mildly Overparametrized Neural Nets can Memorize Training Data Efficiently
Abstract: It has been observed \citep{zhang2016understanding} that deep neural networks can memorize: they achieve 100\% accuracy on the training data. Recent theoretical results explained such behavior in highly overparametrized regimes, where the number of neurons in each layer is larger than the number of training samples. In this paper, we show that neural networks can be trained to memorize training data perfectly in a mildly overparametrized regime, where the number of parameters is just a constant factor more than the number of training samples, and the number of neurons is much smaller.
DOI: 10.48550/arxiv.1909.11837
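
To make the two regimes in the abstract concrete, here is a minimal parameter-counting sketch for a one-hidden-layer network. The architecture, sample count, and input dimension are illustrative assumptions for this sketch, not the paper's actual construction:

```python
def two_layer_param_count(d, m):
    """Parameters of a one-hidden-layer net with d-dim input,
    m hidden units, and a scalar output: hidden weights and
    biases plus output weights and bias."""
    return m * d + m + m + 1

n = 50_000  # number of training samples (hypothetical)
d = 784     # input dimension (hypothetical, e.g. flattened 28x28 images)

# Highly overparametrized regime: width at least n, so the
# parameter count far exceeds the sample count.
m_wide = n
print(two_layer_param_count(d, m_wide))  # ~3.9e7 params for 5e4 samples

# Mildly overparametrized regime: pick the width so the parameter
# count is only a constant factor (here ~2) above n, and note the
# number of neurons is then much smaller than n.
m_mild = (2 * n) // (d + 2)
print(m_mild, two_layer_param_count(d, m_mild))  # ~127 neurons, ~1e5 params
```

Under these assumed numbers, the mild regime uses about 127 neurons and roughly 2n parameters to target n = 50,000 samples, which is the parameters-versus-neurons gap the abstract describes.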