S^{2}-Net: Semantic and Saliency Attention Network for Person Re-Identification

Bibliographic Details
Published in: IEEE Transactions on Multimedia, 2023, Vol. 25, pp. 4387-4399
Main Authors: Ren, Xuena; Zhang, Dongming; Bao, Xiuguo; Zhang, Yongdong
Format: Article
Language: English
Online Access: Order full text
Description
Summary: Person re-identification (Re-ID) is still a challenging task when moving objects or another person occludes the probe person. Mainstream methods based on even partitioning apply off-the-shelf human semantic parsing to highlight the non-occluded parts. In this paper, we apply an attention branch to learn the human semantic partition, avoiding the misalignment introduced by even partitioning. In detail, we propose a semantic attention branch to learn five human semantic maps. We also note that some accessories or belongings, such as a hat or bag, may provide more informative clues and improve person Re-ID. Human semantic parsing, however, usually treats non-human parts as distractions and discards them. To recover these missing clues, we design a branch to capture the salient non-human parts. Finally, we merge the semantic and saliency attention to build an end-to-end network, named S^{2}-Net. To further improve Re-ID, we develop a trade-off weighting scheme between semantic and saliency attention so that the weight can be set to match the actual scene. Extensive experiments show that S^{2}-Net achieves competitive performance: 87.4% mAP on Market1501 and 79.3%/56.1% rank-1/mAP on MSMT17, without semantic supervision. The source code is available at https://github.com/upgirlnana/S2Net.
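
The abstract outlines a dual-branch design: a semantic attention branch predicting five human part maps, a saliency branch recovering salient non-human regions (e.g., hats and bags), and a weighted merge of the two. The following PyTorch sketch illustrates one plausible form of such a weighted two-branch attention fusion; the module name, layer choices, and the scalar trade-off weight lam are illustrative assumptions, not the authors' implementation (see the linked repository for that).

    # Minimal sketch of a two-branch attention fusion in the spirit of the
    # abstract. All names, shapes, and the fusion rule are assumptions.
    import torch
    import torch.nn as nn

    class DualAttentionFusion(nn.Module):
        """Blend semantic-part attention and saliency attention over a
        backbone feature map, controlled by a trade-off weight."""

        def __init__(self, in_channels: int, num_parts: int = 5, lam: float = 0.5):
            super().__init__()
            self.lam = lam  # trade-off between branches (assumed a fixed scalar here)
            # Semantic branch: predicts `num_parts` soft human-part maps.
            self.semantic = nn.Conv2d(in_channels, num_parts, kernel_size=1)
            # Saliency branch: predicts one map for salient non-human regions.
            self.saliency = nn.Conv2d(in_channels, 1, kernel_size=1)

        def forward(self, feat: torch.Tensor) -> torch.Tensor:
            # feat: (B, C, H, W) backbone features.
            sem = torch.softmax(self.semantic(feat), dim=1)       # (B, P, H, W)
            sal = torch.sigmoid(self.saliency(feat))              # (B, 1, H, W)
            # Collapse part maps into a single foreground mask, then blend.
            sem_mask = sem.max(dim=1, keepdim=True).values        # (B, 1, H, W)
            attn = self.lam * sem_mask + (1.0 - self.lam) * sal   # weighted merge
            return feat * attn                                    # re-weighted features

    if __name__ == "__main__":
        # Usage on a dummy ResNet-like feature map.
        x = torch.randn(2, 2048, 24, 8)
        fused = DualAttentionFusion(in_channels=2048)(x)
        print(fused.shape)  # torch.Size([2, 2048, 24, 8])

In this sketch the trade-off weight is a fixed hyperparameter; per the abstract, the actual scheme sets this weight according to the scene.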
ISSN: 1520-9210, 1941-0077
DOI: 10.1109/TMM.2022.3174768