Text-Guided Prototype Generation for Occluded Person Re-Identification


Detailed Description

Bibliographic Details
Published in: IEEE Signal Processing Letters, 2024, Vol. 31, pp. 2350-2354
Main authors: Jiang, Min; Liu, Xinyu; Kong, Jun
Format: Article
Language: English
Description
Summary: Occluded person re-identification (ReID) focuses on identifying persons who are partially occluded, especially in multi-camera scenarios. Most methods employ the background to create artificial occlusions. However, simple artificial occlusions cannot effectively simulate real-world occluded scenarios, due to their lack of semantic information and their limited ability to disrupt the model's attention. In this paper, we present Text-Guided Prototype Generation (TGPG) for occluded person ReID. On the one hand, to fully exploit the potential of text as prior information, the Mask Prototype Generation (MPG) strategy is presented to generate prototypes that capture the pretrained model's attention, similar to realistic occlusions. On the other hand, to establish a relationship between holistic person features and occluded person features, the Intra-modality Spatial Consistency (ISC) loss is introduced, enhancing the consistency and representativeness of the generated mask prototypes. Comprehensive experiments conducted on the Occluded-Duke and Occluded-ReID datasets confirm our method's superiority over state-of-the-art approaches.
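The record does not spell out how the ISC loss is computed. As a rough, generic illustration only (not the authors' exact formulation), a spatial-consistency objective between holistic and occluded features might compare the pairwise similarity structure of spatial patch features across the two views; all function names and shapes below are assumptions:

```python
import numpy as np

def spatial_relation(feats):
    """Pairwise cosine-similarity matrix over spatial positions.
    feats: (P, D) array of P spatial patch features of dimension D."""
    unit = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    return unit @ unit.T  # (P, P) relation matrix

def isc_loss(holistic, occluded):
    """Mean squared difference between the relation matrices of the
    holistic and occluded feature maps (a generic consistency loss)."""
    return float(np.mean((spatial_relation(holistic)
                          - spatial_relation(occluded)) ** 2))

rng = np.random.default_rng(0)
h = rng.normal(size=(8, 16))             # holistic patch features
o = h + 0.1 * rng.normal(size=(8, 16))   # lightly perturbed "occluded" features
print(isc_loss(h, h))  # 0.0 for identical inputs
print(isc_loss(h, o))
```

Minimizing such a term encourages the occluded branch to preserve the spatial relations of the holistic branch, which is one common way to realize a consistency constraint; the paper itself should be consulted for the actual ISC definition.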
ISSN: 1070-9908, 1558-2361
DOI:10.1109/LSP.2024.3456007