Gender Classification From NIR Images by Using Quadrature Encoding Filters of the Most Relevant Features

Bibliographic Details
Published in: IEEE Access, 2019, Vol. 7, p. 29114-29127
Main authors: Tapia, Juan E.; Perez, Claudio A.
Format: Article
Language: English
Online access: Full text
Description
Abstract: In the past few years, accuracy in determining gender from iris images has increased significantly, approaching levels that make novel applications of this biometric technology feasible. In this paper, we report the gender classification rate obtained by using a 2-D Quadrature Quaternionic filter and a selection of the most relevant features from the normalized iris images. We encoded the phase information of the normalized images using 4 bits per pixel with a 2-D Gabor filter and selected the best bits from the four resulting images (1 real and 3 imaginary), instead of using the traditional 1-D log-Gabor encoding method. We used traditional hand-crafted and automatic methods to select and extract the most relevant features from whole iris images, image blocks, and individual pixels, and compared how effective these methods were at separating features of female and male iris images. Selecting iris blocks and features reduces the computational time and, at a basic science level, is of great value in understanding which features and pixels can be extracted from the iris to classify gender. The Quaternionic-Code with the complementary feature selection method achieved the best results on the GFI-UND database, with 93.45% for the left iris and 95.45% for the right iris, both with 2400 selected features. We compared our results with the best previously published results and with those obtained using convolutional neural network feature extraction, and found ours to be superior.
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2019.2902470
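The abstract describes encoding the phase of a normalized iris image into 4 bits per pixel from the one real and three imaginary responses of a quaternionic quadrature Gabor filter. The sketch below is a minimal illustration of that idea, not the authors' implementation: it assumes the common construction of the four quaternionic components from even/odd cosine/sine carriers under a Gaussian envelope, and all parameter values (kernel size, frequencies, sigma) are illustrative placeholders.

```python
# Minimal sketch (not the paper's code): 4-bit-per-pixel phase encoding of a
# normalized iris image with a quaternionic (quadrature) 2-D Gabor filter.
import numpy as np
from scipy.signal import fftconvolve

def quaternionic_gabor_kernels(size=15, u0=0.15, v0=0.15, sigma=3.0):
    """Return the four components (real, i, j, k) of a quaternionic Gabor filter.
    Parameter values are illustrative, not those used in the paper."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    envelope = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    cx, sx = np.cos(2 * np.pi * u0 * x), np.sin(2 * np.pi * u0 * x)
    cy, sy = np.cos(2 * np.pi * v0 * y), np.sin(2 * np.pi * v0 * y)
    return (envelope * cx * cy,   # real component
            envelope * sx * cy,   # i (imaginary) component
            envelope * cx * sy,   # j (imaginary) component
            envelope * sx * sy)   # k (imaginary) component

def encode_4bits(normalized_iris):
    """Convolve the image with each component and keep the sign of each
    response, giving a 4-bit code per pixel (shape: H x W x 4, values 0/1)."""
    kernels = quaternionic_gabor_kernels()
    responses = [fftconvolve(normalized_iris, k, mode="same") for k in kernels]
    return np.stack([(r >= 0).astype(np.uint8) for r in responses], axis=-1)

if __name__ == "__main__":
    iris = np.random.rand(64, 512)   # stand-in for a normalized iris strip
    code = encode_4bits(iris)
    print(code.shape, code.dtype)    # (64, 512, 4) uint8
    # A feature-selection step (not shown) would then pick the most relevant
    # bits or blocks from this code before training a gender classifier.
```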