Stability for the Training of Deep Neural Networks and Other Classifiers
Format: Article
Language: English
Abstract: We examine the stability of loss-minimizing training processes that are used for deep neural networks (DNN) and other classifiers. While a classifier is optimized during training through a so-called loss function, the performance of classifiers is usually evaluated by some measure of accuracy, such as the overall accuracy, which quantifies the proportion of objects that are correctly classified. This leads to the guiding question of stability: does decreasing loss through training always result in increased accuracy? We formalize the notion of stability and provide examples of instability. Our main result consists of two novel conditions on the classifier which, if either is satisfied, ensure stability of training; that is, we derive tight bounds on accuracy as loss decreases. We also derive a sufficient condition for stability on the training set alone, identifying flat portions of the data manifold as potential sources of instability. The latter condition is explicitly verifiable on the training dataset. Our results do not depend on the algorithm used for training, as long as loss decreases with training.
DOI: 10.48550/arxiv.2002.04122
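The abstract's guiding question, whether decreasing loss always increases accuracy, can be illustrated with a minimal toy sketch that is not taken from the paper itself: on a two-sample binary problem, the specific prediction probabilities and the choice of binary cross-entropy as the loss are illustrative assumptions, and the example shows average loss decreasing while overall accuracy drops.

```python
import math

def avg_cross_entropy(probs, labels):
    """Average binary cross-entropy; probs[i] is the predicted P(label = 1)."""
    return sum(
        -math.log(p) if y == 1 else -math.log(1.0 - p)
        for p, y in zip(probs, labels)
    ) / len(labels)

def overall_accuracy(probs, labels, threshold=0.5):
    """Proportion of samples whose thresholded prediction matches the label."""
    preds = [1 if p >= threshold else 0 for p in probs]
    return sum(int(p == y) for p, y in zip(preds, labels)) / len(labels)

# Two samples, both of class 1 (illustrative values, not from the paper).
labels = [1, 1]
probs_before = [0.55, 0.55]   # earlier training state: both barely correct
probs_after = [0.99, 0.45]    # later training state: one very confident, one wrong

print(avg_cross_entropy(probs_before, labels), overall_accuracy(probs_before, labels))
# ~0.598 average loss, accuracy 1.0
print(avg_cross_entropy(probs_after, labels), overall_accuracy(probs_after, labels))
# ~0.404 average loss, accuracy 0.5 -> loss decreased, accuracy dropped
```

This is exactly the kind of instability the abstract refers to: the loss rewards confidence on one sample enough to offset a misclassification on another, so loss and accuracy need not move together.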