Highly effective end-to-end single-to-multichannel feature fusion and ensemble classification to decode emotional secrets from small-scale spontaneous facial micro-expressions
Published in: Journal of King Saud University - Computer and Information Sciences, 2023-09, Vol. 35 (8), p. 101653, Article 101653
Main authors: , , , ,
Format: Article
Language: English
Online access: Full text
Abstract: Facial Micro-Expression (ME) is one of the predominant non-verbal cues for revealing the true emotional states that people cautiously try to conceal. However, emotion recognition from spontaneous ME images is held back by the short duration and low intensity of MEs and by the lack of sufficient samples and consistency among publicly available ME datasets. In this study, we propose two highly effective, lightweight, and generalized fusion models, the single-channel DLRRF-MER and the multi-channel DLH-3C-FUSION, inspired by deep dense convolutional models and the texture-based feature descriptors Local Binary Pattern (LBP) and Histogram of Oriented Gradients (HOG), to recognize MEs from the apex frame. A composite dataset is constructed from five publicly available ME datasets: CASME, CASME II, CAS(ME)², SAMM, and MMEW. Pre-training is performed on a new composition of five facial macro-expression datasets: CK+, MUGFE, Oulu-CASIA, SFEW, and RAF-DB. The proposed models are fine-tuned on the target ME dataset and evaluated rigorously under Stratified 5-Fold and 10-Fold, Leave-One-Subject-Out (LOSO), and Leave-One-Dataset-Out (LODO) cross-validation (CV). In all evaluations, the proposed models show remarkable improvements in effectiveness, surpassing state-of-the-art accuracies and demonstrating higher generalization capacity.
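To make the multi-channel fusion idea from the abstract concrete, here is a minimal sketch, not the authors' released DLH-3C-FUSION implementation: LBP and HOG descriptors are computed from the apex frame with scikit-image and concatenated with a deep feature vector, stubbed here as a dummy 512-dimensional embedding. The helper names (`lbp_histogram`, `hog_descriptor`, `fuse_channels`), the image size, and the descriptor settings are illustrative assumptions, not values from the paper.

```python
# Sketch of three-channel feature fusion on an apex frame (assumed setup,
# not the paper's code): LBP histogram + HOG descriptor + deep features.
import numpy as np
from skimage.feature import local_binary_pattern, hog

def lbp_histogram(gray, P=8, R=1):
    # Rotation-invariant uniform LBP yields P + 2 distinct codes;
    # pool them into a normalized histogram.
    codes = local_binary_pattern(gray, P, R, method="uniform")
    n_bins = P + 2
    hist, _ = np.histogram(codes, bins=n_bins, range=(0, n_bins))
    return hist / (hist.sum() + 1e-8)

def hog_descriptor(gray):
    # Standard HOG descriptor over the whole apex frame.
    return hog(gray, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), feature_vector=True)

def fuse_channels(gray, deep_vec):
    # Channel fusion by concatenation: texture descriptors + deep embedding.
    return np.concatenate([lbp_histogram(gray), hog_descriptor(gray), deep_vec])

# Toy usage: a synthetic 128x128 grayscale "apex frame" and a dummy
# 512-d vector standing in for the dense-CNN channel.
apex = (np.random.rand(128, 128) * 255).astype(np.uint8)
cnn_vec = np.random.rand(512)
fused = fuse_channels(apex, cnn_vec)
print(fused.shape)  # (10 + 8100 + 512,) for these settings
```

For the evaluation side, the Leave-One-Subject-Out protocol named in the abstract corresponds to scikit-learn's LeaveOneGroupOut splitter with per-sample subject IDs passed as the groups argument.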
ISSN: 1319-1578, 2213-1248
DOI: 10.1016/j.jksuci.2023.101653