An efficient facial emotion recognition system using novel deep learning neural network-regression activation classifier
Published in: | Multimedia tools and applications 2021-05, Vol.80 (12), p.17543-17568 |
---|---|
Main authors: | , |
Format: | Article |
Language: | eng |
Subjects: | |
Online access: | Full text |
Abstract: | Facial expression recognition (FER) occupies a significant place in the computer vision field. It has been studied for a long period and has made progress in recent decades, yet recognizing facial expressions with high accuracy remains hard because of the disparity among facial expressions. To overcome such difficulties, an efficient Facial Emotion Recognition (FER) system is proposed using a novel Deep Learning Neural Network-regression activation (DR) classifier. The proposed method has six phases: pre-processing, facial point extraction, segmentation, feature extraction, feature selection, and classification. First, the input image is pre-processed using the Gamma-HE technique, and facial points are then extracted using a Pyramid Histogram of Oriented Gradients (PHOG) based Supervised Descent Method (SDM). The facial parts are segmented using the Viola-Jones Algorithm (VJA), after which Local Tetra Pattern (LTrP), cluster shade, Inverse Difference Moment (IDM), local homogeneity, optimum probability, cluster prominence, dissimilarity, autocorrelation, and contrast features are extracted. A Modified Monarch Butterfly Optimization (MMBO) algorithm selects the necessary features from the extracted set. From the extracted facial points, the DR classifier classifies the emotions of the given input image. Two datasets were used to analyze the proposed system's performance: on the CK+ database the proposed work attains 0.9885 accuracy, and on the JAFFE database it attains 0.9727 accuracy. The experimental results also show that the proposed work outperforms existing systems on statistical metrics. |
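The Gamma-HE pre-processing step named in the abstract combines gamma correction with histogram equalization. A minimal NumPy sketch of that combination follows; the gamma value and the 8-bit grayscale input are illustrative assumptions, not parameters given in the abstract:

```python
import numpy as np

def gamma_he(image, gamma=0.6):
    """Gamma correction followed by histogram equalization on an
    8-bit grayscale image (gamma=0.6 is a hypothetical choice)."""
    # Gamma correction: normalize to [0, 1], raise to gamma, rescale.
    corrected = (255.0 * (image / 255.0) ** gamma).astype(np.uint8)
    # Histogram equalization via the cumulative distribution function.
    hist = np.bincount(corrected.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]
    lut = np.clip(
        np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255.0), 0, 255
    ).astype(np.uint8)
    return lut[corrected]

# Example: a synthetic low-contrast 4x4 "image".
img = np.array([[100, 101, 102, 103]] * 4, dtype=np.uint8)
out = gamma_he(img)
```

After equalization the narrow 100-103 intensity band is stretched across the full 0-255 range, which is the contrast-enhancement effect the pre-processing phase relies on.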
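Several of the listed texture features (contrast, IDM, dissimilarity) are conventionally computed from a gray-level co-occurrence matrix (GLCM). The sketch below uses the textbook formulas over horizontal pixel pairs; the quantization level and formulas are standard illustrations and may differ from the authors' exact definitions:

```python
import numpy as np

def glcm_features(image, levels=8):
    """GLCM over horizontal neighbours, plus three texture features
    named in the abstract (textbook definitions, assumed here)."""
    # Quantize the 8-bit image to `levels` gray levels.
    q = (image.astype(np.float64) / 256.0 * levels).astype(int)
    # Count co-occurrences of horizontally adjacent level pairs.
    glcm = np.zeros((levels, levels))
    for a, b in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        glcm[a, b] += 1
    p = glcm / glcm.sum()  # normalize counts to joint probabilities
    i, j = np.indices(p.shape)
    contrast = np.sum(p * (i - j) ** 2)
    idm = np.sum(p / (1.0 + (i - j) ** 2))       # inverse difference moment
    dissimilarity = np.sum(p * np.abs(i - j))
    return contrast, idm, dissimilarity

# Example: vertical stripes give every horizontal pair a level gap of 2.
img = np.array([[0, 64, 128, 192]] * 4, dtype=np.uint8)
c, h, d = glcm_features(img)
```

For this striped example every co-occurring pair differs by two quantized levels, so contrast is 4, IDM is 0.2, and dissimilarity is 2, matching the formulas term by term.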
---|---|
ISSN: | 1380-7501 1573-7721 |
DOI: | 10.1007/s11042-021-10547-2 |