A robust similarity based deep siamese convolutional neural network for gait recognition across views
Published in: Computational intelligence, 2020-08, Vol. 36 (3), p. 1290-1319
Main authors: , , ,
Format: Article
Language: English
Online access: Full text
Abstract: Gait recognition is regarded as an emerging biometric technology for identifying humans by their walking behavior. The major challenge addressed in this article is the significant variation caused by covariate factors such as clothing, carrying conditions, and view-angle changes, which undesirably affects gait recognition performance. In recent years, deep learning techniques have achieved phenomenal accuracy on a variety of challenging classification problems. Given the enormous amount of real-world data, a convolutional neural network can approximate the complex nonlinear functions needed to build a generalized deep convolutional neural network (DCNN) architecture for gait recognition. A DCNN can handle relatively large multiview datasets with or without data augmentation and fine-tuning. This article proposes a color-mapped contour gait image as a gait feature to address the variations caused by these cofactors and to enable gait recognition across views. We also compare several edge detection algorithms for gait template generation and select the best among them. The databases considered in this work include the widely used CASIA-B dataset and the OULP database. Our experiments show significant improvement in fixed-view, cross-view, and multiview gait recognition compared with recent methodologies.
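The abstract mentions comparing edge detection algorithms to generate the contour gait template. As a minimal illustrative sketch (not the paper's actual pipeline), the following compares a Sobel gradient magnitude against a Laplacian response on a synthetic binary silhouette; the kernels and the toy silhouette are assumptions for illustration only:

```python
import numpy as np

def conv2(img, kernel):
    """2-D cross-correlation with zero padding (enough for comparing 3x3 detectors)."""
    h, w = img.shape
    padded = np.pad(img.astype(float), 1)
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(padded[i:i + 3, j:j + 3] * kernel)
    return out

def sobel_magnitude(img):
    # Horizontal and vertical Sobel kernels; combine gradients into one magnitude map
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    return np.hypot(conv2(img, kx), conv2(img, ky))

def laplacian_response(img):
    # 4-neighbour Laplacian kernel; absolute value gives an edge-strength map
    k = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], dtype=float)
    return np.abs(conv2(img, k))

# Toy binary "silhouette": a filled rectangle on an empty background
silhouette = np.zeros((20, 20))
silhouette[5:15, 5:15] = 1.0

sobel = sobel_magnitude(silhouette)
lap = laplacian_response(silhouette)

# Both detectors respond only on the silhouette boundary, not in its interior,
# which is the property a contour-based gait template relies on.
print(sobel[10, 10], sobel[10, 5])  # interior vs. boundary
print(lap[10, 10], lap[10, 5])
```

In a real gait pipeline one would run such detectors on per-frame silhouettes and aggregate the resulting contours over a gait cycle; the choice between detectors then comes down to which produces the most stable boundary under the covariate conditions the abstract lists.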
ISSN: 0824-7935, 1467-8640
DOI: 10.1111/coin.12361