Tackling biased complementary label learning with large margin
Published in: Information Sciences 2025-01, Vol. 687, p. 121400, Article 121400
Format: Article
Language: English
Online access: Full text
Abstract: Complementary Label Learning (CLL) is a typical weakly supervised learning protocol in which each instance is associated with one complementary label specifying a class the instance does not belong to. Current CLL approaches assume that complementary labels are uniformly sampled from all non-ground-truth labels, and thus implicitly and locally share complementary labels by solely reducing the logit of the complementary label in one way or another. In this paper, we point out that, when the uniform assumption does not hold, existing CLL methods are weakened in their ability to share complementary labels and fail to produce classifiers with a large logit margin (LM), resulting in a significant performance drop. To address these issues, we instead introduce the complementary logit margin (CLM) and empirically show that increasing the CLM promotes the sharing of complementary labels under the biased CLL setting. Accordingly, we propose a surrogate complementary one-versus-rest (COVR) loss and demonstrate, with both theoretical and empirical evidence, that optimizing COVR effectively increases the CLM. Extensive experiments verify that the proposed COVR yields substantial improvements under both the biased CLL setting and an even more practical one: instance-dependent complementary label learning.
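The abstract names but does not define the COVR loss. As a rough, non-authoritative illustration of the one-versus-rest idea applied to complementary labels, the PyTorch sketch below treats the complementary class as a negative (its logit is pushed down) and every remaining class as a potential positive (their logits are pushed up), using the sigmoid surrogate ℓ(z) = log(1 + e^(−z)); the function name and this exact formulation are assumptions, not the paper's definition.

```python
import torch
import torch.nn.functional as F

def complementary_ovr_loss(logits: torch.Tensor, comp_labels: torch.Tensor) -> torch.Tensor:
    """One-versus-rest style loss driven by complementary labels (illustrative sketch).

    logits:      (batch, num_classes) raw classifier outputs f_k(x)
    comp_labels: (batch,) index of the class each instance does NOT belong to

    Uses the binary sigmoid loss ell(z) = log(1 + exp(-z)) = softplus(-z):
    the complementary class is treated as a negative (its logit is pushed
    down) and every other class as a potential positive (its logit is pushed up).
    """
    num_classes = logits.size(1)
    comp_mask = F.one_hot(comp_labels, num_classes).bool()  # True at the complementary class
    # ell(-f_{bar y}): penalize a large logit on the complementary class
    neg_term = F.softplus(logits[comp_mask])
    # ell(f_k) for all k != bar y: encourage large logits on the remaining classes
    pos_term = F.softplus(-logits[~comp_mask].view(-1, num_classes - 1)).sum(dim=1)
    return (neg_term + pos_term).mean()
```

Jointly raising all non-complementary logits, rather than only lowering the complementary one as softmax-based losses do, is one plausible reading of how such a loss enlarges the complementary logit margin.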
Highlights:
• We consider biased complementary label learning (BCLL), where complementary labels are not drawn uniformly (see the sketch following this list), and investigate why existing methods perform poorly under this setting.
• We propose the complementary logit margin (CLM) to replace the logit margin and enhance label sharing in BCLL, together with a novel surrogate COVR loss.
• We theoretically validate the effectiveness of the proposed COVR variant in increasing the CLM and improving CLL performance.
• Extensive experiments confirm the superiority of the proposed COVR under both the biased and the more challenging instance-dependent CLL settings.
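To make the biased setting in the first highlight concrete, one can replace uniform sampling of complementary labels with a non-uniform class distribution. The sketch below is a minimal assumed simulation (the specific bias pattern and function name are illustrative, not taken from the paper): for each instance, one over-represented wrong class receives most of the probability mass, so the uniform assumption is violated.

```python
import numpy as np

def sample_biased_complementary_labels(true_labels: np.ndarray,
                                       num_classes: int,
                                       bias: float = 0.7,
                                       seed: int = 0) -> np.ndarray:
    """Draw one complementary label per instance from a biased distribution.

    Under the uniform assumption each of the K-1 non-ground-truth classes is
    drawn with probability 1/(K-1). Here, as an illustrative bias, class
    (y + 1) mod K receives probability `bias` and the remaining K-2 classes
    share what is left. Requires num_classes >= 3.
    """
    rng = np.random.default_rng(seed)
    comp_labels = np.empty_like(true_labels)
    for i, y in enumerate(true_labels):
        probs = np.full(num_classes, (1.0 - bias) / (num_classes - 2))
        probs[(y + 1) % num_classes] = bias  # over-represented complementary class
        probs[y] = 0.0                       # the true class is never complementary
        comp_labels[i] = rng.choice(num_classes, p=probs)
    return comp_labels
```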
ISSN: 0020-0255
DOI: 10.1016/j.ins.2024.121400