IRANet: Identity-relevance aware representation for cloth-changing person re-identification


Detailed description

Saved in:
Bibliographic details
Published in: Image and Vision Computing, 2022-01, Vol. 117, p. 104335, Article 104335
Main authors: Shi, Wei; Liu, Hong; Liu, Mengyuan
Format: Article
Language: English
Keywords:
Online access: Full text
Description

Abstract:

Highlights:
• IRANet is proposed to address the cloth-changing person re-identification task.
• A more reliable cue is introduced by mining identity information from the head.
• A human head detection module is designed to localize the human head area.
• A head-guided attention module is used to highlight the head embedding.
• The proposed method achieves higher matching rates than competing methods.

Existing person re-identification methods mainly focus on searching for the target person across disjoint camera views over a short period of time. Under this setting, these methods rely on the assumption that query and gallery images of the same person show the same clothing. To tackle the challenge of clothing changes over a long duration, this paper proposes an identity-relevance aware neural network (IRANet) for cloth-changing person re-identification. Specifically, a human head detection module is designed to localize the human head part with the help of human parsing estimation. The detected head part contains abundant identity information, including facial features and head shape. Then, raw person images, together with the detected head areas, are transformed into feature representations by a feed-forward network. The features learned from raw person images capture more global context, while the features learned from head areas capture more identity-relevant attributes. Finally, a head-guided attention module guides the global features learned from raw person images to focus on the identity-relevant head areas. The proposed method achieves mAP accuracy of 25.4% on the Celeb-reID-light dataset, 19.0% on the Celeb-reID dataset, and 53.0% (cloth-changing setting) on the PRCC dataset, demonstrating the superiority of our approach for the cloth-changing person re-identification task.
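The abstract describes a head-guided attention module that steers global person features toward the identity-relevant head region. The paper's exact formulation is not given in this record, so the following is only a minimal sketch of one plausible reading, in which the head embedding acts as a query that weights spatial locations of the global feature map (all names and shapes here are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def head_guided_attention(global_feat, head_emb):
    """Hypothetical sketch: pool a global feature map using attention
    weights derived from similarity to a head embedding."""
    # global_feat: (C, H, W) feature map from the full person image
    # head_emb:    (C,) embedding from the detected head crop
    C, H, W = global_feat.shape
    flat = global_feat.reshape(C, H * W)        # flatten spatial grid: (C, HW)
    scores = head_emb @ flat / np.sqrt(C)       # scaled similarity per location
    attn = np.exp(scores - scores.max())
    attn /= attn.sum()                          # softmax over the HW locations
    attended = flat @ attn                      # (C,) attention-pooled feature
    return attended, attn.reshape(H, W)

rng = np.random.default_rng(0)
feat = rng.standard_normal((8, 4, 4))           # toy global feature map
head = rng.standard_normal(8)                   # toy head embedding
pooled, attn_map = head_guided_attention(feat, head)
```

Locations whose features align with the head embedding receive higher attention weights, so the pooled vector emphasizes head-consistent, clothing-independent cues; the actual IRANet module may differ in how the two feature streams are fused.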
ISSN: 0262-8856, 1872-8138
DOI: 10.1016/j.imavis.2021.104335