An Exploration into why Output Regularization Mitigates Label Noise
Abstract: Label noise presents a real challenge for supervised learning algorithms. Consequently, mitigating label noise has attracted immense research attention in recent years. Noise-robust losses are one of the more promising approaches for dealing with label noise, as these methods only require changing the loss function and do not require changing the design of the classifier itself, which can be expensive in terms of development time. In this work we focus on losses that use output regularization (such as label smoothing and entropy regularization). Although these losses perform well in practice, the explanation for their ability to mitigate label noise lacks mathematical rigor. In this work we aim to close this gap by showing that losses which incorporate an output regularization term become symmetric as the regularization coefficient goes to infinity. We argue that the regularization coefficient can be seen as a hyper-parameter controlling the symmetricity, and thus the noise robustness, of the loss function.
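In the noise-robustness literature, a loss is commonly called symmetric when its sum over all possible labels is a constant that does not depend on the prediction; the abstract's claim is that output regularization pushes a loss toward this property as the coefficient grows. The sketch below is one minimal, hypothetical way to write such an output-regularized loss (with a label-smoothing-style KL-to-uniform term and an entropy "confidence penalty" as the two regularizers), together with a numerical check that, as the coefficient grows, the loss values assigned to different candidate labels become relatively indistinguishable. This only illustrates the intuition under these assumptions; it is not the paper's exact formulation or proof.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over a 1-D array of logits.
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()

def regularized_loss(p, y, lam, reg="label_smoothing"):
    # Generic output-regularized loss for a single example:
    #   L(p, y) = -log p[y] + lam * R(p),
    # where R depends only on the predicted distribution p, not on the label y,
    # and lam is the regularization coefficient discussed in the abstract.
    ce = -np.log(p[y])
    k = len(p)
    if reg == "label_smoothing":
        # KL(uniform || p): the label-independent part of a label-smoothing loss.
        R = np.sum((1.0 / k) * (np.log(1.0 / k) - np.log(p)))
    elif reg == "entropy":
        # Negative entropy, i.e. a "confidence penalty" style regularizer.
        R = np.sum(p * np.log(p))
    else:
        raise ValueError(f"unknown regularizer: {reg}")
    return ce + lam * R

if __name__ == "__main__":
    p = softmax(np.array([3.0, 0.0, -1.0]))  # a fairly confident prediction
    for lam in (0.0, 1.0, 10.0, 100.0):
        losses = np.array([regularized_loss(p, y, lam) for y in range(len(p))])
        spread = (losses.max() - losses.min()) / abs(losses.mean())
        # As lam grows, the label-independent term dominates, so the losses for
        # the different candidate labels become (relatively) indistinguishable,
        # which is the symmetry-like behaviour the abstract points toward.
        print(f"lam={lam:6.1f}  losses={np.round(losses, 3)}  relative spread={spread:.3f}")
```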
DOI: 10.48550/arxiv.2104.12477