DotSCN: Group Re-Identification via Domain-Transferred Single and Couple Representation Learning

Bibliographic Details
Published in: IEEE Transactions on Circuits and Systems for Video Technology, 2021-07, Vol. 31 (7), pp. 2739-2750
Authors: Huang, Ziling; Wang, Zheng; Tsai, Chung-Chi; Satoh, Shin'ichi; Lin, Chia-Wen
Format: Article
Language: English
Description
Abstract: Group re-identification (G-ReID) is an important yet less-studied task. Its challenges lie not only in the appearance changes of individuals, but also in group layout and membership changes. To address these issues, the key task of G-ReID is to learn group representations robust to such changes. Nevertheless, unlike ReID tasks, comprehensive publicly available G-ReID datasets are still lacking, making it difficult to learn effective representations using deep learning models. In this article, we propose a Domain-Transferred Single and Couple Representation Learning Network (DotSCN). Its merits are twofold: 1) Owing to the lack of labeled training samples for G-ReID, existing G-ReID methods mainly rely on unsatisfactory hand-crafted features. To harness the representation-learning power of deep models, we first treat a group as a collection of multiple individuals and propose transferring the representations of individuals learned from an existing labeled ReID dataset to a target G-ReID domain that lacks a suitable training dataset. 2) Taking into account the neighborhood relationships within a group, we further propose learning a novel couple representation between two group members, which achieves better discriminative power in G-ReID tasks. In addition, we propose a weight learning method that adaptively fuses the domain-transferred individual and couple representations based on an L-shape prior. Extensive experimental results demonstrate the effectiveness of our approach, which significantly outperforms state-of-the-art methods by 11.7% CMC-1 on the Road Group dataset and by 39.0% CMC-1 on the DukeMTMC dataset.
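To make the pipeline described in the abstract concrete, here is a minimal sketch, assuming a pre-trained ReID backbone and a pairwise couple-feature network, both supplied as callables that return NumPy feature vectors. Everything in it (the function names, the nearest-neighbor set distance, and the fixed fusion weights standing in for the adaptively learned, L-shape-prior-based weights) is an illustrative assumption, not the authors' implementation.

```python
# Illustrative sketch of single + couple representation matching for G-ReID.
# `backbone` and `couple_net` are hypothetical callables returning 1-D
# NumPy feature vectors; the set distance and fusion form are assumptions.
from itertools import combinations
import numpy as np

def individual_features(crops, backbone):
    # Single representations: one feature per group member, produced by a
    # ReID backbone trained on a labeled source domain and transferred to
    # the target G-ReID domain.
    return [backbone(c) for c in crops]

def couple_features(crops, couple_net):
    # Couple representations: one feature per pair of group members,
    # capturing neighborhood relations within the group.
    return [couple_net(a, b) for a, b in combinations(crops, 2)]

def set_distance(feats_a, feats_b):
    # Distance between two variable-size feature sets: average each
    # feature's nearest-neighbor distance in the other set, so the measure
    # tolerates layout and membership changes (a common choice, assumed here).
    d = np.array([[np.linalg.norm(f - g) for g in feats_b] for f in feats_a])
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())

def group_distance(probe, gallery, backbone, couple_net,
                   w_single=0.5, w_couple=0.5):
    # Fuse the single and couple cues. DotSCN learns these weights
    # adaptively from an L-shape prior; fixed scalars stand in here.
    d_single = set_distance(individual_features(probe, backbone),
                            individual_features(gallery, backbone))
    d_couple = set_distance(couple_features(probe, couple_net),
                            couple_features(gallery, couple_net))
    return w_single * d_single + w_couple * d_couple
```

Ranking gallery groups by group_distance against a probe group would then yield the CMC scores of the kind reported in the abstract.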
ISSN: 1051-8215 (print), 1558-2205 (electronic)
DOI: 10.1109/TCSVT.2020.3031303