Robust gait recognition via discriminative set matching

► We propose a framework for multiview gait recognition across varying views and walking conditions. ► Our approach is computationally inexpensive and suitable for real applications. ► Our method performs robustly even with a limited number of training samples per subject. ► Extensive experimental results are presented to demonstrate the effectiveness of the proposed framework.

Bibliographic Details
Published in: Journal of Visual Communication and Image Representation, 2013-05, Vol. 24 (4), pp. 439–447
Main Authors: Liu, Nini; Lu, Jiwen; Yang, Gao; Tan, Yap-Peng
Format: Article
Language: English
Online Access: Full text
Description
Abstract: In this paper, we propose a framework for gait recognition across varying views and walking conditions based on gait sequences collected from multiple viewpoints. Unlike most existing view-dependent gait recognition systems, we devise a new Multiview Subspace Representation (MSR) method which treats gait sequences collected from different views of the same subject as a feature set and extracts a linear subspace to describe that set. Subspace-based feature representation methods capture the variance among samples and can therefore handle certain intra-subject variations. To better exploit the discriminative information in these subspaces for recognition, we further propose a marginal canonical correlation analysis (MCCA) method which maximizes the margins between interclass subspaces within a neighborhood. Experimental results on a widely used multiview gait database demonstrate the effectiveness of the proposed framework.
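The abstract describes representing each subject's multiview gait sequences as a linear subspace and comparing subspaces via canonical correlations. Below is a minimal illustrative sketch of that set-as-subspace matching idea, not the authors' exact MSR or MCCA algorithm; the subspace_basis helper, the feature dimensions, and the random toy data are all hypothetical.

```python
import numpy as np

def subspace_basis(view_features, dim):
    """Orthonormal basis for the subspace spanned by one subject's
    multiview feature set (rows = one gait feature vector per view)."""
    # Center the set, then take the leading left singular vectors of the
    # (feature_dim x num_views) data matrix as the linear subspace.
    X = view_features - view_features.mean(axis=0, keepdims=True)
    U, _, _ = np.linalg.svd(X.T, full_matrices=False)
    return U[:, :dim]  # shape: (feature_dim, dim)

def canonical_correlations(U1, U2):
    """Cosines of the principal angles between two subspaces with
    orthonormal bases U1 and U2; values near 1 mean similar subspaces."""
    s = np.linalg.svd(U1.T @ U2, compute_uv=False)
    return np.clip(s, 0.0, 1.0)

# Toy usage with hypothetical sizes: 5 views, 64-dim features, 3-dim subspaces.
rng = np.random.default_rng(0)
gallery = subspace_basis(rng.normal(size=(5, 64)), dim=3)
probe = subspace_basis(rng.normal(size=(5, 64)), dim=3)
print("set-to-set similarity:", canonical_correlations(gallery, probe).sum())
```

Summing the canonical correlations is one common set-to-set similarity; the paper's MCCA additionally learns a discriminative projection that maximizes margins between interclass subspaces within a neighborhood, which this sketch omits.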
ISSN: 1047-3203, 1095-9076
DOI: 10.1016/j.jvcir.2013.02.002