An efficient deep learning-assisted person re-identification solution for intelligent video surveillance in smart cities
Published in: Frontiers of Computer Science, 2023-08, Vol. 17 (4), p. 174329, Article 174329
Main authors: , , , , ,
Format: Article
Language: English
Subjects:
Online access: Full text
Summary: Innovations in Internet of Everything (IoE)-enabled systems are transforming the environments in which we live and interact, known globally as smart cities. Intelligent video-surveillance systems are critical to increasing the security of these smart cities. In today's world of smart video surveillance in particular, person re-identification (Re-ID) has received growing attention from researchers. Many researchers have designed deep learning-based algorithms for person Re-ID because such methods have achieved substantial breakthroughs in computer vision. In this line of research, we designed an adaptive feature refinement-based deep learning architecture for person Re-ID. The proposed architecture learns spatial and channel attention to capture the inter-channel and inter-spatial relationships among features of images of the same individual taken from different camera viewpoints. In addition, a spatial pyramid pooling layer is inserted to extract multiscale, fixed-dimension feature vectors irrespective of the size of the feature maps. The model's effectiveness is validated on the CUHK01 and CUHK02 datasets. Compared with existing approaches, the approach presented in this paper achieves encouraging Rank 1 and Rank 5 scores of 24.6% and 54.8%, respectively.
ISSN: 2095-2228, 2095-2236
DOI: 10.1007/s11704-022-2050-4
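
The abstract describes two generic building blocks: channel and spatial attention over backbone feature maps, and spatial pyramid pooling (SPP) to produce fixed-length descriptors from variably sized inputs. The sketch below is a minimal PyTorch illustration of those ideas, not the authors' released code; the module names, the reduction ratio of 16, and the pyramid levels (1, 2, 4) are illustrative assumptions, loosely following CBAM-style attention and standard SPP.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ChannelAttention(nn.Module):
    """Weighs feature channels using statistics pooled over spatial dims."""

    def __init__(self, channels, reduction=16):  # reduction=16 is an assumption
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )

    def forward(self, x):
        # Aggregate spatial information with average and max pooling,
        # then share one MLP across both descriptors.
        avg = self.mlp(x.mean(dim=(2, 3)))
        mx = self.mlp(x.amax(dim=(2, 3)))
        scale = torch.sigmoid(avg + mx).unsqueeze(-1).unsqueeze(-1)
        return x * scale


class SpatialAttention(nn.Module):
    """Weighs spatial locations using channel-pooled feature statistics."""

    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        # Pool over the channel axis, then learn a 2-D attention map.
        avg = x.mean(dim=1, keepdim=True)
        mx = x.amax(dim=1, keepdim=True)
        scale = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * scale


class SpatialPyramidPooling(nn.Module):
    """Pools feature maps at several grid sizes and concatenates the
    results, yielding a fixed-length vector regardless of input size."""

    def __init__(self, levels=(1, 2, 4)):  # pyramid levels are an assumption
        super().__init__()
        self.levels = levels

    def forward(self, x):
        n = x.size(0)
        pooled = [F.adaptive_max_pool2d(x, level).view(n, -1)
                  for level in self.levels]
        return torch.cat(pooled, dim=1)  # shape: (n, c * sum(l * l))


# Usage sketch: refine backbone features, then extract a fixed-size vector.
feats = torch.randn(8, 512, 24, 12)                    # hypothetical backbone output
feats = SpatialAttention()(ChannelAttention(512)(feats))
vec = SpatialPyramidPooling()(feats)                   # (8, 512 * 21) = (8, 10752)
```

Because SPP pools adaptively at each pyramid level, the output vector length depends only on the channel count and the chosen levels, which is what lets such an architecture compare images of different resolutions with a single fixed-dimension descriptor.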