Label correction using contrastive prototypical classifier for noisy label learning

Bibliographic Details
Published in: Information Sciences, 2023-11, Vol. 649, Article 119647
Authors: Xu, Chaoyang; Lin, Renjie; Cai, Jinyu; Wang, Shiping
Format: Article
Language: English
Online access: Full text
Description
Abstract: Deep neural networks typically require a large number of accurately labeled images for training with cross-entropy loss and often overfit noisy labels. Contrastive learning has proven effective in noisy label learning because it can learn discriminative representations. However, instance-level contrastive learning is only weakly correlated with semantic classes: it ignores the correlation between instances and their labels, and instances sharing the same label can still diverge semantically, which inevitably leads to class collisions and hampers label correction. To address these problems, this study proposes a noisy label learning framework that performs label correction and constructs a contrastive prototypical classifier cooperatively. In particular, the prototypical classifier minimizes the distance between instances and their class prototypes using a contrastive prototypical loss, improving intraclass compactness. Furthermore, we provide a theoretical guarantee that the contrastive prototypical loss has a smaller Lipschitz constant, which boosts robustness. Motivated by this analysis, the framework corrects labels using the predictions of the contrastive prototypical classifier. Extensive experiments demonstrate that the proposed framework achieves superior classification accuracy on synthetic datasets with various noise patterns and levels.
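
To make the mechanism concrete, the following is a minimal PyTorch sketch (not the authors' implementation) of a prototype-based contrastive classifier loss and a confidence-thresholded label-correction step. The function names, the temperature and confidence-threshold values, and the use of cosine similarity over L2-normalized embeddings and prototypes are assumptions made for illustration, not details taken from the paper.

    import torch
    import torch.nn.functional as F

    def contrastive_prototypical_loss(features, labels, prototypes, temperature=0.1):
        # features:   (N, D) L2-normalized instance embeddings
        # labels:     (N,)   possibly noisy class indices
        # prototypes: (C, D) L2-normalized class prototypes
        # Cross-entropy over instance-prototype similarities pulls each instance
        # toward its own class prototype and away from the other prototypes.
        logits = features @ prototypes.t() / temperature
        return F.cross_entropy(logits, labels)

    def correct_labels(features, labels, prototypes, temperature=0.1, threshold=0.9):
        # Replace a noisy label with the prototypical classifier's prediction
        # when that prediction is sufficiently confident; otherwise keep the label.
        with torch.no_grad():
            probs = F.softmax(features @ prototypes.t() / temperature, dim=1)
            confidence, prediction = probs.max(dim=1)
            return torch.where(confidence > threshold, prediction, labels)

In such a sketch, the corrected labels returned by correct_labels would feed back into the next round of contrastive_prototypical_loss, so that label correction and classifier training proceed cooperatively, in the spirit of the framework described above.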
ISSN: 0020-0255, 1872-6291
DOI: 10.1016/j.ins.2023.119647