There Are Many Consistent Explanations of Unlabeled Data: Why You Should Average
Main Authors: | Ben Athiwaratkun, Marc Finzi, Pavel Izmailov, Andrew Gordon Wilson |
Format: | Article |
Language: | eng |
Subjects: |  |
Online Access: | Order full text |
Abstract: | Presently the most successful approaches to semi-supervised learning are
based on consistency regularization, whereby a model is trained to be robust to
small perturbations of its inputs and parameters. To understand consistency
regularization, we conceptually explore how loss geometry interacts with
training procedures. The consistency loss dramatically improves generalization
performance over supervised-only training; however, we show that SGD struggles
to converge on the consistency loss and continues to make large steps that lead
to changes in predictions on the test data. Motivated by these observations, we
propose to train consistency-based methods with Stochastic Weight Averaging
(SWA), a recent approach which averages weights along the trajectory of SGD
with a modified learning rate schedule. We also propose fast-SWA, which further
accelerates convergence by averaging multiple points within each cycle of a
cyclical learning rate schedule. With weight averaging, we achieve the best
known semi-supervised results on CIFAR-10 and CIFAR-100, over many different
quantities of labeled training data. For example, we achieve 5.0% error on
CIFAR-10 with only 4000 labels, compared to the previous best result in the
literature of 6.3%. |
DOI: | 10.48550/arxiv.1806.05594 |
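
As a rough illustration of the approach the abstract describes, the sketch below combines a consistency loss on unlabeled data with fast-SWA-style weight averaging under a cyclical learning rate. This is not the authors' implementation: the toy data, network, cycle length, consistency weight, and averaging window are all illustrative assumptions, and the consistency term here is a simple input-perturbation (Pi-model-style) stand-in.

```python
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Toy data standing in for a semi-supervised problem: a small labeled set
# and a larger unlabeled set (sizes and dimensions are assumptions).
x_lab = torch.randn(64, 32)
y_lab = torch.randint(0, 10, (64,))
x_unlab = torch.randn(512, 32)

model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))
swa_model = copy.deepcopy(model)                  # running weight average
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)

cycle_len, lr_max, lr_min = 10, 0.1, 0.001        # assumed cyclical schedule
n_averaged = 0

def cyclical_lr(epoch):
    """Decay linearly from lr_max to lr_min within each cycle."""
    t = (epoch % cycle_len) / (cycle_len - 1)
    return (1 - t) * lr_max + t * lr_min

for epoch in range(50):
    for group in optimizer.param_groups:
        group["lr"] = cyclical_lr(epoch)

    # Supervised loss on the labeled data.
    sup_loss = F.cross_entropy(model(x_lab), y_lab)

    # Consistency loss: predictions on unlabeled data should be stable
    # under a small input perturbation (target prediction is detached).
    noise = 0.05 * torch.randn_like(x_unlab)
    p_clean = F.softmax(model(x_unlab), dim=1).detach()
    p_noisy = F.softmax(model(x_unlab + noise), dim=1)
    cons_loss = F.mse_loss(p_noisy, p_clean)

    loss = sup_loss + 10.0 * cons_loss            # assumed consistency weight
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    # fast-SWA-style averaging: fold in several weight snapshots per cycle,
    # rather than only the single snapshot at each cycle's end as in plain SWA.
    if epoch % cycle_len >= cycle_len // 2:       # assumed averaging window
        with torch.no_grad():
            for p_swa, p in zip(swa_model.parameters(), model.parameters()):
                p_swa.mul_(n_averaged / (n_averaged + 1)).add_(p / (n_averaged + 1))
        n_averaged += 1

# swa_model now holds the averaged weights, which would be used at test time.
```

In this sketch, `swa_model` is the model that would be evaluated; averaging multiple points within each learning-rate cycle is what distinguishes the fast-SWA variant from averaging only once per cycle.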