Deep k-NN for Noisy Labels
Modern machine learning models are often trained on examples with noisy labels that hurt performance and are hard to identify. In this paper, we provide an empirical study showing that a simple $k$-nearest neighbor-based filtering approach on the logit layer of a preliminary model can remove mislabeled training data and produce more accurate models than many recently proposed methods. We also provide new statistical guarantees on its efficacy.

| Main Authors: |  |
|---|---|
| Format: | Article |
| Language: | English |
| Subjects: |  |
| Online Access: | Order full text |
| DOI: | 10.48550/arxiv.2004.12289 |
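
The abstract describes the method only at a high level: train a preliminary model, embed each training example via its logit-layer activations, and filter out examples whose labels disagree with their $k$ nearest neighbors in that space. Below is a minimal sketch of that filtering idea in Python using scikit-learn. The function name `knn_filter`, the choice of `k`, and the agreement threshold `tau` are illustrative assumptions, not the paper's exact procedure or hyperparameters.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def knn_filter(logits, labels, k=10, tau=0.5):
    """Return a boolean mask of training examples to keep.

    logits : (n, c) array of logit-layer activations from a preliminary model
    labels : (n,) array of (possibly noisy) integer labels

    An example is kept when at least a fraction `tau` of its k nearest
    neighbors in logit space share its label.
    """
    # Query k+1 neighbors because each point's nearest neighbor is itself.
    nn = NearestNeighbors(n_neighbors=k + 1).fit(logits)
    _, idx = nn.kneighbors(logits)
    neighbor_labels = labels[idx[:, 1:]]                      # (n, k), self excluded
    agreement = (neighbor_labels == labels[:, None]).mean(axis=1)
    return agreement >= tau

# Usage on toy data: filter, then one would retrain on the kept subset.
rng = np.random.default_rng(0)
logits = rng.normal(size=(100, 3))          # stand-in for real logit activations
labels = rng.integers(0, 3, size=100)       # stand-in for noisy labels
keep = knn_filter(logits, labels, k=5, tau=0.4)
print(f"kept {keep.sum()} of {len(labels)} examples")
```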