Noise-Resistance Learning via Multi-Granularity Consistency for Unsupervised Domain Adaptive Person Re-Identification
Published in: | ACM Transactions on Multimedia Computing, Communications, and Applications, 2024-11 |
Main Authors: | , , , , |
Format: | Article |
Language: | eng |
Keywords: | |
Online Access: | Full text |
Abstract: | Unsupervised domain adaptive person re-identification aims at adapting a re-identification model trained on a labeled source domain to an unlabeled target domain. The mainstream pipeline alternates between clustering-based pseudo-label prediction and representation learning, but the imperfect interaction between these steps generates noisy pseudo labels that diminish the model's effectiveness. Previous methods reduce the impact of noisy pseudo labels by assessing consistency only at a single granularity level, overlooking multi-level confidence analysis that could yield better feature representations. To address this issue, we propose a novel Multi-Granularity Consistency Network (MGCN) that performs noise-resistance learning from consistency perspectives at different granularities, including prototype-wise, triplet-wise, and list-wise consistency, thereby simultaneously suppressing the contribution of noisy samples. Specifically, prototype-wise consistency leverages the affinity between the prototypical outputs of the teacher and student networks to evaluate the reliability of a target sample's pseudo label, thus reducing the negative impact of unreliable labels on the identity classification loss. Triplet-wise consistency examines the triplet distance discrepancies between the teacher and student networks to retain reliable, informative samples that satisfy the triplet distance constraint in the triplet loss, thereby facilitating more effective model training and better performance in the target domain. Furthermore, list-wise consistency uses list-wise similarity rankings from the teacher's memory bank to select more dependable neighboring samples in the student's memory bank, pulling them closer in feature space to alleviate the detrimental effect of noisy labels on the contrastive loss. Based on the multi-granularity consistency, MGCN evaluates the credibility of pseudo labels and adjusts their impact across the three re-ID losses for effective domain adaptation. Experimental results demonstrate that our proposed method achieves significant improvements over existing methods on multiple benchmarks. |
ISSN: | 1551-6857, 1551-6865 |
DOI: | 10.1145/3702328 |
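
The prototype-wise consistency mechanism described in the abstract lends itself to a short illustration. The sketch below is written under assumptions: it is not the authors' released code, and the names `prototype_consistency_weights`, `weighted_id_loss`, `teacher_logits`, and `student_logits` are hypothetical. It shows one way the affinity between teacher and student prototype distributions could serve as a per-sample weight that suppresses noisy pseudo labels in the identity classification loss.

```python
# Minimal sketch (not the authors' implementation) of prototype-wise consistency
# weighting for the identity classification loss in teacher-student training.
import torch
import torch.nn.functional as F


def prototype_consistency_weights(teacher_logits: torch.Tensor,
                                  student_logits: torch.Tensor) -> torch.Tensor:
    """Per-sample reliability from the affinity between the teacher's and the
    student's prototype (cluster) probability distributions; agreement -> weight near 1."""
    p_teacher = F.softmax(teacher_logits, dim=1)
    p_student = F.softmax(student_logits, dim=1)
    # Cosine similarity between the two prototype distributions as the affinity score.
    return F.cosine_similarity(p_teacher, p_student, dim=1)


def weighted_id_loss(student_logits: torch.Tensor,
                     pseudo_labels: torch.Tensor,
                     weights: torch.Tensor) -> torch.Tensor:
    """Identity classification loss with per-sample weights that down-weight
    samples whose pseudo labels the teacher and student disagree on."""
    per_sample = F.cross_entropy(student_logits, pseudo_labels, reduction="none")
    return (weights * per_sample).mean()


if __name__ == "__main__":
    torch.manual_seed(0)
    num_clusters, batch = 702, 8  # illustrative cluster count on the target domain
    teacher_logits = torch.randn(batch, num_clusters)
    student_logits = teacher_logits + 0.1 * torch.randn(batch, num_clusters)
    pseudo_labels = teacher_logits.argmax(dim=1)  # cluster assignments as pseudo labels

    w = prototype_consistency_weights(teacher_logits, student_logits)
    loss = weighted_id_loss(student_logits, pseudo_labels, w)
    print(w, loss.item())
```

In such a scheme, samples on which the two networks disagree contribute less to the loss, which is consistent with the noise-suppression behaviour the abstract attributes to prototype-wise consistency; the triplet-wise and list-wise terms would be weighted or filtered analogously.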