Diverse part attentive network for video-based person re-identification
Saved in:
Published in: Pattern Recognition Letters, 2021-09, Vol. 149, pp. 17-23
Main authors: , , , , , , , ,
Format: Article
Language: English
Subjects:
Online access: Full text
Summary:
• We propose a lightweight attention mechanism that exploits diverse parts of human bodies to address visual variations.
• We propose an effective framework for video-based person re-identification.
• We conduct extensive experiments on three popular benchmarks to demonstrate the effectiveness of our proposed method.
Attention mechanisms have achieved success in video-based person re-identification (re-ID). However, current global attention methods tend to focus on the most salient parts, e.g., clothes, and ignore other subtle but valuable cues, e.g., hair, bags, and shoes, so they do not make full use of the information carried by diverse parts of the human body. To tackle this issue, we propose a Diverse Part Attentive Network (DPAN) to exploit discriminative and diverse body cues. The framework consists of two modules: spatial diverse part attention and temporal diverse part attention. The spatial module uses channel grouping to attend to diverse parts of human bodies, including both salient and subtle parts. The temporal module learns diverse weights for fusing the learned frame features. Moreover, the framework is lightweight, introducing only marginal additional parameters and computational cost. Extensive experiments were conducted on three popular benchmarks, i.e., iLIDS-VID, PRID2011, and MARS. Our method achieves competitive performance on these datasets compared with state-of-the-art methods.
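To make the two modules more concrete, the sketch below gives one plausible reading of the abstract in PyTorch: channel grouping with a per-group spatial attention map, and a learned per-frame weight for temporal fusion. This is a minimal illustration under stated assumptions, not the authors' implementation; the module names, group count, pooling choices, and tensor shapes are all assumptions made for the example.

```python
# Minimal, illustrative sketch (not the paper's code). Assumes per-frame
# feature maps of shape (B*T, C, H, W) from a CNN backbone; group count,
# layer choices and names are hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpatialDiversePartAttention(nn.Module):
    """Split channels into G groups; each group gets its own spatial
    attention map, encouraging different groups to focus on different
    body parts (salient and subtle)."""
    def __init__(self, channels: int, groups: int = 4):
        super().__init__()
        assert channels % groups == 0
        self.groups = groups
        group_channels = channels // groups
        # one lightweight 1x1 conv per group producing a spatial attention map
        self.attn_convs = nn.ModuleList(
            [nn.Conv2d(group_channels, 1, kernel_size=1) for _ in range(groups)]
        )

    def forward(self, x):                      # x: (B*T, C, H, W)
        chunks = torch.chunk(x, self.groups, dim=1)
        out = []
        for conv, chunk in zip(self.attn_convs, chunks):
            attn = torch.sigmoid(conv(chunk))  # (B*T, 1, H, W)
            out.append(chunk * attn)           # re-weight each group spatially
        return torch.cat(out, dim=1)           # (B*T, C, H, W)

class TemporalDiversePartAttention(nn.Module):
    """Score each frame from its pooled feature and fuse the sequence
    with a softmax-weighted sum."""
    def __init__(self, channels: int):
        super().__init__()
        self.score = nn.Linear(channels, 1)

    def forward(self, x):                      # x: (B, T, C)
        w = F.softmax(self.score(x), dim=1)    # (B, T, 1) weights over frames
        return (w * x).sum(dim=1)              # (B, C) fused clip feature

# Usage sketch: B clips of T frames with C-channel feature maps.
B, T, C, H, W = 2, 4, 256, 16, 8
feats = torch.randn(B * T, C, H, W)
spatial = SpatialDiversePartAttention(C, groups=4)
temporal = TemporalDiversePartAttention(C)
f = spatial(feats)                             # (B*T, C, H, W)
f = F.adaptive_avg_pool2d(f, 1).flatten(1)     # (B*T, C)
clip_feat = temporal(f.view(B, T, C))          # (B, C)
```

The grouping step is what encourages diversity in this reading: each channel group gets its own attention map, so no single salient region (e.g., clothes) dominates all channels, and the temporal weighting then decides how much each frame contributes to the fused clip feature.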
ISSN: 0167-8655, 1872-7344
DOI: 10.1016/j.patrec.2021.05.020