PAFormer: Part Aware Transformer for Person Re-identification
Format: Article
Language: English
Online access: Order full text
Abstract: Within the domain of person re-identification (ReID), partial ReID methods are the mainstream approach, aiming to measure feature distances through comparisons of body parts between samples. In practice, however, previous methods often lack sufficient awareness of the anatomical aspects of body parts, and therefore fail to capture features of the same body part across different samples. To address this issue, we introduce the Part Aware Transformer (PAFormer), a pose-estimation-based ReID model that can perform precise part-to-part comparison. To inject part awareness, we introduce learnable parameters called "pose tokens", which estimate the correlation between each body part and partial regions of the image. Notably, at the inference phase, PAFormer operates without the additional body-part localization modules that are commonly used in previous ReID methods leveraging pose estimation. Additionally, building on this enhanced awareness of body parts, PAFormer uses a learning-based visibility predictor to estimate the degree of occlusion for each body part. We also introduce a teacher forcing technique that uses ground-truth visibility scores, enabling PAFormer to be trained only on visible parts. Extensive experiments show that our method outperforms existing approaches on well-known ReID benchmark datasets.
DOI: 10.48550/arxiv.2408.05918
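The abstract describes two mechanisms: learnable pose tokens that attend over image regions to produce one feature per body part, and a visibility score per part that restricts the distance computation to parts visible in both samples. The following is a minimal numpy sketch of those two ideas only, not the paper's actual architecture; all function names, the number of parts, and the visibility-weighting scheme are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def part_features(patch_tokens, pose_tokens):
    """Each learnable pose token attends over a sample's patch tokens,
    yielding one aggregated feature per body part (simplified single-head
    attention; the real model is a full transformer)."""
    scale = np.sqrt(patch_tokens.shape[1])
    attn = softmax(pose_tokens @ patch_tokens.T / scale)  # (parts, patches)
    return attn @ patch_tokens                            # (parts, dim)

def part_distance(parts_a, parts_b, vis_a, vis_b):
    """Part-to-part distance weighted by visibility: a part contributes
    only when it is visible in both samples (hypothetical weighting)."""
    w = vis_a * vis_b                                     # joint visibility
    d = np.linalg.norm(parts_a - parts_b, axis=1)         # per-part distance
    return (w * d).sum() / max(w.sum(), 1e-8)             # visible-part mean

rng = np.random.default_rng(0)
patches_a = rng.standard_normal((196, 64))   # e.g. 14x14 ViT patch tokens
patches_b = rng.standard_normal((196, 64))
pose_tokens = rng.standard_normal((6, 64))   # one token per body part

pa = part_features(patches_a, pose_tokens)
pb = part_features(patches_b, pose_tokens)
vis_a = np.array([1.0, 1.0, 1.0, 0.0, 1.0, 1.0])  # one part occluded
vis_b = np.ones(6)
dist = part_distance(pa, pb, vis_a, vis_b)
```

The same joint-visibility mask can serve the teacher-forcing step mentioned in the abstract: during training, ground-truth visibility scores replace the predictor's output so the loss is computed over visible parts only.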