Eye-Strip based Person Identification based on Non-Subsampled Contourlet Transform

Bibliographic details
Published in: International journal of computer applications, 2015-01, Vol. 121 (12), p. 14-20
Authors: Patil, Hemprasad Y; Kothari, Ashwin G; Bhurchandi, Kishor M
Format: Article
Language: English
Subjects:
Online access: Full text
Description
Abstract: Many state-of-the-art face recognition systems fail to identify a person when most of the face is occluded. This paper addresses the intriguing problem of recognizing a person from eye-strip samples alone, used as test images against a database of either full-face images or, again, eye-strips. The non-subsampled Contourlet transform is a distinguished algorithm for extracting soft, smooth, contour-like edges without any loss of information. It produces eminent features owing to its localization- and directionality-preserving abilities, which strongly resemble the feature-extraction abilities of the human visual cortex, and it requires no boosting of sub-band coefficients. We propose a novel approach that sums all the sub-bands at each pyramidal level of the non-subsampled Contourlet transform to obtain a hybrid high-frequency composite sub-band and reduce dimensionality, followed by feature extraction with the Weber Local Descriptor (WLD) on that composite sub-band. Linear Discriminant Analysis is then used to discard insignificant features, and matching is performed by a nearest-neighbor classifier with the Euclidean distance measure. The JAFFE, Yale, and Essex University Faces94 databases are used for experimentation and benchmarking. The analysis indicates that the proposed approach yields a robust feature vector and very good recognition rates using eye-strip features alone.
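Two stages of the pipeline described in the abstract can be sketched briefly: summing the directional sub-bands at one pyramidal level into a hybrid composite, and the final nearest-neighbor match with Euclidean distance. This is a minimal illustration only; the random arrays stand in for real NSCT coefficients, and the 2-D vectors stand in for the actual WLD + LDA feature vectors, whose computation is not shown here.

```python
import numpy as np

# --- Hybrid composite sub-band (one NSCT pyramid level) ---
# The approach sums all directional sub-bands at a pyramidal level
# into one high-frequency composite; these random arrays are mere
# placeholders for real NSCT sub-band coefficients.
rng = np.random.default_rng(0)
subbands = [rng.standard_normal((32, 32)) for _ in range(4)]
composite = np.sum(np.stack(subbands), axis=0)  # element-wise sum

# --- Nearest-neighbor matching with Euclidean distance ---
def nearest_neighbor_match(db_feats, db_labels, query):
    """Return the label of the database vector closest to `query`."""
    dists = np.linalg.norm(db_feats - query, axis=1)
    return db_labels[int(np.argmin(dists))]

# Toy 2-D features standing in for the WLD + LDA vectors
db = np.array([[0.0, 0.0], [5.0, 5.0], [10.0, 0.0]])
labels = np.array([1, 2, 3])
print(nearest_neighbor_match(db, labels, np.array([4.8, 5.1])))  # prints 2
```

In the paper's setting, each database row would be the reduced feature vector of an enrolled subject (full face or eye-strip), and the query would be the vector extracted from a test eye-strip.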
ISSN: 0975-8887
DOI: 10.5120/21591-4681