UNM: A Universal Approach for Noisy Multi-Label Learning

Detailed Description

Bibliographic Details
Published in: IEEE Transactions on Knowledge and Data Engineering, 2024-09, Vol. 36 (9), pp. 4968-4980
Main Authors: Chen, Jia-Yao; Li, Shao-Yuan; Huang, Sheng-Jun; Chen, Songcan; Wang, Lei; Xie, Ming-Kun
Format: Article
Language: English
Online Access: Order full text
Description
Summary: Multi-label image classification relies on large-scale, well-maintained datasets, which can easily be mislabeled for various subjective reasons. Existing methods for coping with noise usually focus on improving model robustness under single-label noise. However, compared with noisy single-label learning, noisy multi-label learning is more practical and more challenging. To reduce the negative impact of noisy multi-label annotations, we propose a universal approach for noisy multi-label learning (UNM). In UNM, we propose a label-wise embedding network that exploits the semantic alignment between label embeddings and their corresponding output features to learn robust feature representations. Meanwhile, mining the co-occurrence of multiple labels is also used to regularize the noisy network predictions. We cyclically change the fitting status of the label-wise embedding network to distinguish noisy samples and generate pseudo-labels for them. As a result, UNM provides an effective way to exploit label-wise features and semantic label embeddings in noisy scenarios. To verify the generalizability of our method, we also evaluate it on Partial Multi-label Learning (PML) and Multi-label Learning with Missing Labels (MLML). Extensive experiments on benchmark datasets including Microsoft COCO, Pascal VOC, and Visual Genome validate the proposed method.
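The label-wise alignment idea from the abstract can be illustrated with a toy sketch. This is not the paper's actual implementation: the cosine-similarity scoring, the sigmoid mapping, and all variable names here are assumptions made purely for illustration of how per-label embeddings might be aligned with per-label output features under (possibly noisy) multi-label targets.

```python
# Hypothetical sketch (not the UNM code): score each label by the cosine
# similarity between its semantic embedding and its label-wise feature,
# then apply a per-label binary cross-entropy against (noisy) targets.
import numpy as np

rng = np.random.default_rng(0)
num_labels, dim = 4, 8

label_emb = rng.normal(size=(num_labels, dim))    # semantic label embeddings
label_feats = rng.normal(size=(num_labels, dim))  # label-wise output features

def cosine_alignment(a, b):
    """Per-label cosine similarity between embeddings and features."""
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    return np.sum(a * b, axis=1)

sims = cosine_alignment(label_emb, label_feats)   # shape: (num_labels,)
probs = 1.0 / (1.0 + np.exp(-sims))               # sigmoid -> per-label score
targets = np.array([1.0, 0.0, 1.0, 0.0])          # toy, possibly noisy labels
bce = -np.mean(targets * np.log(probs)
               + (1.0 - targets) * np.log(1.0 - probs))
print("per-label BCE:", round(float(bce), 4))
```

In the paper's setting, minimizing such an alignment-based loss is what would push the label-wise features toward their label embeddings; the cyclic fitting and co-occurrence regularization described in the abstract would sit on top of a scoring step like this one.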
ISSN: 1041-4347; 1558-2191
DOI:10.1109/TKDE.2024.3373500