Robust Fine-Grained Learning for Cloth-Changing Person Re-Identification

Bibliographic Details
Published in: Mathematics (Basel), 2025-01, Vol. 13 (3), p. 429
Authors: Yin, Qingze; Ding, Guodong; Zhang, Tongpo; Gong, Yumei
Format: Article
Language: English
Online access: Full text
Description
Abstract: Cloth-changing Person Re-Identification (CC-ReID) poses a significant challenge in tracking pedestrians across cameras while accounting for changes in clothing appearance. Despite recent progress in CC-ReID, existing methods predominantly focus on learning the unique biological features of pedestrians, often overlooking constraints that promote the learning of cloth-agnostic features. Addressing this limitation, we propose a Robust Fine-grained Learning Network (RFLNet) to effectively learn robust cloth-agnostic features by leveraging fine-grained semantic constraints. Specifically, we introduce a four-body-part attention module to enhance the learning of detailed pedestrian semantic features. To further strengthen the model’s robustness to clothing variations, we employ a random erasing algorithm, encouraging the network to concentrate on cloth-irrelevant attributes. Additionally, we design a fine-grained semantic loss to guide the model in learning identity-related, detailed semantic features, thereby improving its focus on cloth-agnostic regions. Comprehensive experiments on widely used CC-ReID benchmarks demonstrate the effectiveness of RFLNet. Our method achieves state-of-the-art performance, including a 0.7% increase in mAP on PRCC and a 1.6% improvement in rank-1 accuracy on DeepChange.
ISSN: 2227-7390
DOI: 10.3390/math13030429
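
To make the components named in the abstract more concrete, the sketch below shows, in PyTorch, how a four-body-part attention head, random erasing augmentation, and a per-part identity loss could be wired together. This is an illustrative reading of the abstract, not the authors' released RFLNet code: every module name, shape, and hyperparameter here (FourPartAttention, FineGrainedSemanticLoss, the 384×192 input size, the erasing probability, and so on) is an assumption.

```python
# Illustrative sketch only: names, shapes, and hyperparameters are assumptions
# inferred from the abstract, not the authors' RFLNet implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import transforms

# Random erasing occludes random rectangles so the network cannot rely solely on
# clothing appearance (applied to tensor images, i.e. after ToTensor()).
train_transform = transforms.Compose([
    transforms.Resize((384, 192)),          # assumed input resolution
    transforms.ToTensor(),
    transforms.RandomErasing(p=0.5, scale=(0.02, 0.2)),
])

class FourPartAttention(nn.Module):
    """Hypothetical four-body-part attention: one spatial attention map per part,
    each pooling the backbone feature map into a part descriptor."""
    def __init__(self, channels: int, num_parts: int = 4):
        super().__init__()
        self.part_conv = nn.Conv2d(channels, num_parts, kernel_size=1)

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        # feat: (B, C, H, W) -> part descriptors: (B, num_parts, C)
        attn = torch.softmax(self.part_conv(feat).flatten(2), dim=-1)   # (B, P, H*W)
        return torch.bmm(attn, feat.flatten(2).transpose(1, 2))         # (B, P, C)

class FineGrainedSemanticLoss(nn.Module):
    """Hypothetical fine-grained semantic loss: average identity cross-entropy
    over the per-part descriptors, pushing each part to stay identity-related."""
    def __init__(self, channels: int, num_ids: int, num_parts: int = 4):
        super().__init__()
        self.classifiers = nn.ModuleList(
            nn.Linear(channels, num_ids) for _ in range(num_parts)
        )

    def forward(self, parts: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
        losses = [F.cross_entropy(clf(parts[:, p]), labels)
                  for p, clf in enumerate(self.classifiers)]
        return torch.stack(losses).mean()

# Example wiring with dummy shapes (batch of 8, ResNet-50-like 2048-channel map):
if __name__ == "__main__":
    feat = torch.randn(8, 2048, 24, 12)
    labels = torch.randint(0, 150, (8,))
    parts = FourPartAttention(2048)(feat)
    loss = FineGrainedSemanticLoss(2048, num_ids=150)(parts, labels)
    print(parts.shape, loss.item())
```

Averaging per-part identity cross-entropies is only one plausible interpretation of the "fine-grained semantic loss"; the paper's actual formulation, part definitions, and training schedule may differ.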