Clothes-Eraser: clothing-aware controllable disentanglement for clothes-changing person re-identification
Published in: Signal, Image and Video Processing, 2024-07, Vol. 18 (5), pp. 4337-4348
Format: Article
Language: English
Online access: Full text
Abstract: This paper studies person re-identification, a problem with significant application prospects in computer vision in the context of urbanization and population growth. Traditional methods for clothes-changing person re-identification face challenges such as appearance variations, occlusions, and lighting conditions. Some existing approaches try to induce the model to learn clothing-irrelevant features by altering the colors of pedestrian clothing in the dataset or by generating pedestrian images with different outfits. However, these methods inevitably damage the original RGB images and lose a substantial amount of valuable information. To address this issue, this paper first proposes a controllable disentanglement method that accurately extracts clothing-irrelevant disentangled semantic features using pedestrian semantic segmentation maps. The proposed “clothing-eraser method” then fuses the disentangled semantic features with the processed RGB images to erase clothing information. We introduce two constraints. First, a discriminative identity feature loss (L_DFL) encourages samples of the same identity to lie closer together while pushing samples of different identities apart. Second, a clothing-irrelevant loss (L_CIL) on RGB images mitigates the interference of different clothing styles with identity features. Experiments on two benchmark datasets show that the proposed method outperforms current state-of-the-art approaches for clothes-changing person re-identification, achieving Rank-1 accuracy of 64.1% and 85.7% in clothing-change scenarios on the PRCC and VC-Clothes datasets, respectively.
ISSN: 1863-1703 (print); 1863-1711 (electronic)
DOI: 10.1007/s11760-024-03076-6