On the Importance of Calibration in Semi-supervised Learning
Main authors: | |
Format: | Article |
Language: | English |
Subjects: | |
Online access: | Order full text |
Summary: | State-of-the-art (SOTA) semi-supervised learning (SSL) methods have been highly successful in leveraging a mix of labeled and unlabeled data by combining techniques of consistency regularization and pseudo-labeling. During pseudo-labeling, the model's predictions on unlabeled data are used for training, and thus model calibration is important in mitigating confirmation bias. Yet, many SOTA methods are optimized for model performance, with little focus directed toward improving model calibration. In this work, we empirically demonstrate that model calibration is strongly correlated with model performance and propose to improve calibration via approximate Bayesian techniques. We introduce a family of new SSL models that optimize for calibration and demonstrate their effectiveness across the standard vision benchmarks CIFAR-10, CIFAR-100, and ImageNet, giving up to a 15.9% improvement in test accuracy. Furthermore, we also demonstrate their effectiveness on additional realistic and challenging problems, such as class-imbalanced datasets and photonics science. |
DOI: | 10.48550/arxiv.2210.04783 |
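
The summary ties pseudo-labeling to calibration: pseudo-labels are typically selected by thresholding predicted confidence, so an over-confident (miscalibrated) model admits wrong labels and then trains on them, which is the confirmation-bias risk mentioned above. The sketch below is a minimal, hypothetical illustration of those two ingredients, confidence-thresholded pseudo-labeling and expected calibration error (ECE), not the paper's approximate-Bayesian method; the function names and the 0.95 threshold are assumptions made for illustration only.

```python
# Illustrative sketch only: confidence-thresholded pseudo-labeling and ECE.
# This is not the paper's method; names and thresholds are hypothetical.
import numpy as np

def pseudo_labels(probs: np.ndarray, threshold: float = 0.95):
    """Keep unlabeled samples whose max predicted probability exceeds the threshold.

    probs: (N, C) array of predicted class probabilities on unlabeled data.
    Returns (indices, hard labels). If the model is over-confident, wrong
    predictions pass the threshold and become (incorrect) training targets.
    """
    confidence = probs.max(axis=1)
    mask = confidence >= threshold
    return np.nonzero(mask)[0], probs[mask].argmax(axis=1)

def expected_calibration_error(probs: np.ndarray, labels: np.ndarray,
                               n_bins: int = 15) -> float:
    """ECE: weighted average of |accuracy - confidence| over confidence bins."""
    confidence = probs.max(axis=1)
    predictions = probs.argmax(axis=1)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (confidence > lo) & (confidence <= hi)
        if in_bin.any():
            acc = (predictions[in_bin] == labels[in_bin]).mean()
            conf = confidence[in_bin].mean()
            ece += in_bin.mean() * abs(acc - conf)
    return ece

# Toy usage with random "model outputs" (no real model or dataset involved).
rng = np.random.default_rng(0)
logits = rng.normal(size=(1000, 10))
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
idx, hard_labels = pseudo_labels(probs, threshold=0.5)
fake_true_labels = rng.integers(0, 10, size=1000)
print(f"{len(idx)} pseudo-labels kept; "
      f"ECE on toy data: {expected_calibration_error(probs, fake_true_labels):.3f}")
```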