Global-Local Discriminative Representation Learning Network for Viewpoint-Aware Vehicle Re-identification in Intelligent Transportation
Published in: IEEE Transactions on Instrumentation and Measurement, 2023-01, Vol. 72, p. 1-1
Main authors: , , , ,
Format: Article
Language: English
Subjects:
Online access: Order full text
Abstract: Vehicle re-identification (Re-ID), which aims at matching vehicles across multiple non-overlapping cameras, is widely recognized as an important application of computer vision in intelligent transportation. One of the major challenges is to extract discriminative features that are resistant to viewpoint variations. To address this problem, this paper proposes a novel vehicle Re-ID model built on effective feature fusion and adaptive part attention. First, we put forward a channel attention-based feature fusion (CAFF) module that learns the significance of features from different layers of the backbone network, allowing the model to leverage complementary features for vehicle Re-ID. Then, to address the viewpoint variation problem, we present an adaptive part attention (APA) module that evaluates the significance of local vehicle parts based on the visible areas and the extracted features. In this way, the model concentrates on vehicle parts rich in discriminative information while paying less attention to parts with limited distinctive capability. Finally, the whole model is trained by simultaneous classification and metric learning. Experiments on two large-scale vehicle Re-ID datasets are carried out to evaluate the proposed model. The results show that our model achieves competitive performance compared with other state-of-the-art approaches.
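The channel attention-based fusion idea described in the abstract can be illustrated with a minimal sketch. This is not the authors' CAFF implementation: the function name, the tiny two-layer MLP, and the pure-Python data layout are assumptions chosen only to keep the example self-contained.

```python
import math


def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))


def channel_attention_fuse(channels, w1, w2):
    """Sketch of channel attention-based feature fusion.

    channels: list of 2-D feature maps (one list-of-rows per channel),
    assumed already concatenated from shallow and deep backbone layers.
    w1, w2: weights of a hypothetical two-layer MLP that maps pooled
    channel statistics to a per-channel importance gate.
    """
    # Global average pool: squeeze each channel to a single statistic.
    pooled = [sum(sum(row) for row in ch) / (len(ch) * len(ch[0]))
              for ch in channels]
    # Tiny MLP: ReLU hidden layer, then a sigmoid gate per channel.
    hidden = [max(0.0, sum(w * p for w, p in zip(row, pooled))) for row in w1]
    gates = [sigmoid(sum(w * h for w, h in zip(row, hidden))) for row in w2]
    # Re-weight every channel by its learned significance, so the model
    # emphasizes informative layers and suppresses less useful ones.
    return [[[g * v for v in row] for row in ch]
            for g, ch in zip(gates, channels)]
```

Because the gates lie in (0, 1), each channel is attenuated according to its learned importance rather than summed with fixed weights, which is the core of the fusion idea the abstract describes.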
ISSN: 0018-9456, 1557-9662
DOI: 10.1109/TIM.2023.3295011