Triaxial Slicing for 3-D Face Recognition From Adapted Rotational Invariants Spatial Moments and Minimal Keypoints Dependence
Saved in:
Published in: IEEE Robotics and Automation Letters, 2018-10, Vol. 3 (4), p. 3513-3520
Main authors: , , ,
Format: Article
Language: English
Keywords:
Online access: Order full text
Abstract: This letter presents a multiple slicing model for three-dimensional (3-D) images of the human face, using the frontal, sagittal, and transverse orthogonal planes. The definition of the segments depends on just one key point, the nose tip, which keeps the model simple and independent of the detection of several key points. For facial recognition, attributes based on adapted 2-D spatial moments of Hu and on 3-D rotation-invariant spatial moments are extracted from each segment. Tests with the proposed model on the Bosphorus database in the neutral vs. non-neutral ROC I experiment, using linear discriminant analysis as the classifier and more than one sample for training, achieved a verification rate of 98.7% at a false acceptance rate of 0.1%. Using a support vector machine as the classifier, rank-1 recognition rates of 99% and 95.4% were achieved for the neutral vs. neutral and neutral vs. non-neutral experiments, respectively. These results approach the state of the art on the Bosphorus database and even surpass it when anger and disgust expressions are evaluated. In addition, we evaluate the generalization of our method on the FRGC v2.0 database and achieve competitive results, making the technique promising, especially for its simplicity.
ISSN: 2377-3766
DOI: 10.1109/LRA.2018.2854295
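
As a rough illustration of the idea described in the abstract above, the following Python sketch partitions a face point cloud into segments using the three orthogonal planes through the nose tip and computes a simple 3-D rotation-invariant moment per segment. The segment definition, function names, and the choice of invariant are assumptions for illustration only, not the authors' implementation.

```python
import numpy as np

def triaxial_slices(points, nose_tip):
    """Split an Nx3 face point cloud into 8 segments using the frontal,
    sagittal, and transverse planes that intersect at the nose tip.
    Hypothetical segment definition; the letter's exact slicing may differ."""
    centred = points - nose_tip               # coordinates relative to the nose tip
    signs = (centred >= 0).astype(int)        # side of each orthogonal plane, per axis
    labels = signs[:, 0] * 4 + signs[:, 1] * 2 + signs[:, 2]  # octant index 0..7
    return [points[labels == k] for k in range(8)]

def central_moment_3d(points, p, q, r):
    """Central spatial moment mu_pqr of a 3-D point set."""
    d = points - points.mean(axis=0)
    return float(np.sum(d[:, 0] ** p * d[:, 1] ** q * d[:, 2] ** r))

def rotation_invariant_j1(points):
    """Classical first 3-D rotation invariant J1 = mu200 + mu020 + mu002
    (trace of the scatter matrix, unchanged by rotations about the centroid)."""
    return sum(central_moment_3d(points, *pqr)
               for pqr in [(2, 0, 0), (0, 2, 0), (0, 0, 2)])

# Toy usage: a random cloud standing in for a registered face scan.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    cloud = rng.normal(size=(5000, 3))
    nose_tip = np.zeros(3)
    features = [rotation_invariant_j1(seg) for seg in triaxial_slices(cloud, nose_tip)]
    print(features)  # one invariant per segment; a real descriptor would stack more moments
```

A full descriptor in the spirit of the letter would concatenate several such invariants (and adapted 2-D Hu moments of projected segments) into a feature vector, which is then fed to an LDA or SVM classifier.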