Person Re-identification with pose variation aware data augmentation



Bibliographic Details
Published in: Neural Computing & Applications, 2022-07, Vol. 34 (14), pp. 11817-11830
Authors: Zhang, Lei; Jiang, Na; Diao, Qishuai; Zhou, Zhong; Wu, Wei
Format: Article
Language: English
Abstract: Person re-identification (Re-ID) aims to match a person of interest across multiple non-overlapping camera views. This is a challenging task, partly because a person captured in surveillance video often undergoes intense pose variations, so differences in appearance across views are typically obvious. In this paper, we propose a pose variation aware data augmentation (PA4) method, which is composed of a pose transfer generative adversarial network (PTGAN) and person re-identification with improved hard example mining (Pre-HEM). Specifically, PTGAN introduces a similarity measurement module to synthesize realistic person images conditioned on pose, which, together with the original images, form an augmented training dataset. Pre-HEM presents a novel way of exploiting the pose-transferred images and the learned pose transfer model for person Re-ID: it replaces invalid samples caused by pose variations and constrains the proportion of pose-transferred samples in each mini-batch. We conduct extensive comparative evaluations to demonstrate the advantages and superiority of the proposed method over state-of-the-art approaches on the Market-1501, DukeMTMC-reID, and CUHK03 datasets.
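To make the mini-batch constraint in Pre-HEM concrete, the following is a minimal sketch (not the authors' implementation) of sampling a mini-batch in which the share of pose-transferred (synthetic) images is capped at a fixed ratio, with the remainder drawn from the original real images. All names and the ratio value (POSE_TRANSFER_RATIO, sample_minibatch, the file-name placeholders) are illustrative assumptions, not details from the paper.

```python
import random

# Assumed cap on the fraction of pose-transferred images per mini-batch.
POSE_TRANSFER_RATIO = 0.25


def sample_minibatch(original_pool, transferred_pool, batch_size=64,
                     ratio=POSE_TRANSFER_RATIO, rng=random):
    """Draw a mini-batch in which at most `ratio` of the images are
    pose-transferred samples; the rest come from the original images."""
    n_transferred = min(int(batch_size * ratio), len(transferred_pool))
    n_original = batch_size - n_transferred
    batch = (rng.sample(transferred_pool, n_transferred)
             + rng.sample(original_pool, n_original))
    rng.shuffle(batch)  # mix real and synthetic samples within the batch
    return batch


if __name__ == "__main__":
    # Hypothetical image lists standing in for the augmented training set.
    originals = [f"real_{i}.jpg" for i in range(200)]
    synthetics = [f"ptgan_{i}.jpg" for i in range(200)]
    print(sample_minibatch(originals, synthetics, batch_size=8))
```

In this sketch the cap simply bounds how many synthetic images enter each batch; the paper's Pre-HEM additionally uses the learned pose transfer model to replace invalid samples before mining hard examples, which is not reproduced here.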
ISSN: 0941-0643, 1433-3058
DOI: 10.1007/s00521-022-07071-1