Deep cross feature adaptive network for facial emotion classification
Published in: | Signal, Image and Video Processing, 2022-03, Vol. 16 (2), pp. 369-376 |
Main authors: | , , |
Format: | Article |
Language: | English |
Subjects: | |
Online access: | Full text |
Abstract: | In this paper, we propose a novel CNN-based model named Deep Cross Feature Adaptive Network (DCFA-CNN) for facial expression recognition. The proposed DCFA-CNN model comprises two major components: a shape feature (ShFeat) block and a texture feature (TexFeat) block. The ShFeat block is responsible for extracting high-level responses that discriminate between different expressive regions, while the TexFeat block captures micro/minute variations that define structural differences within those regions. Moreover, DCFA-CNN embeds a two-branch cross-relationship to combine the information of the ShFeat and TexFeat blocks. These different responses boost the discriminability of the network by incorporating complementary features. The effectiveness of the proposed DCFA-CNN is evaluated extensively on four datasets: CK+, MUG, ISED and OULU-CASIA, under single-domain subject-independent and cross-domain ethnicity-independent experimental setups. The experimental results show significant improvements over MobileNet of 21.8% and 21.55% on ISED and 6.43% and 17.9% on OULU-CASIA for the 6- and 7-class settings. Extensive ablation experiments were conducted to validate the role of each module in the DCFA-CNN framework. |
ISSN: | 1863-1703, 1863-1711 |
DOI: | 10.1007/s11760-021-01941-2 |
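The abstract describes a two-branch design (ShFeat for high-level shape responses, TexFeat for fine texture variations) whose outputs are combined through a cross-relationship. The following is a minimal PyTorch sketch of that general idea only; the layer widths, depths, and the concatenate-then-convolve fusion are illustrative assumptions, not the published DCFA-CNN architecture.

```python
# Minimal sketch of a two-branch CNN with cross-feature fusion, loosely
# following the idea in the abstract (ShFeat/TexFeat branches plus a
# cross-relationship). Layer widths, kernel sizes, and the fusion step
# are illustrative assumptions, not the authors' implementation.
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch):
    """3x3 conv -> BN -> ReLU, a generic building block for both branches."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )


class TwoBranchCrossNet(nn.Module):
    """Shape branch + texture branch whose feature maps are cross-combined."""

    def __init__(self, num_classes=7):
        super().__init__()
        # Shape branch: aimed at high-level responses from expressive regions.
        self.shape = nn.Sequential(
            conv_block(3, 32), nn.MaxPool2d(2),
            conv_block(32, 64), nn.MaxPool2d(2),
        )
        # Texture branch: aimed at fine, local (micro/minute) variations.
        self.texture = nn.Sequential(
            conv_block(3, 32), nn.MaxPool2d(2),
            conv_block(32, 64), nn.MaxPool2d(2),
        )
        # Cross-relationship (assumed form): concatenate both feature maps
        # and let a convolutional block mix the complementary responses.
        self.fuse = conv_block(128, 128)
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(128, num_classes),
        )

    def forward(self, x):
        s = self.shape(x)       # high-level (shape) responses
        t = self.texture(x)     # fine-grained (texture) responses
        fused = self.fuse(torch.cat([s, t], dim=1))  # cross-feature fusion
        return self.head(fused)


if __name__ == "__main__":
    # Quick shape check on a dummy batch of 96x96 face crops.
    model = TwoBranchCrossNet(num_classes=7)
    logits = model(torch.randn(2, 3, 96, 96))
    print(logits.shape)  # torch.Size([2, 7])
```

Running the script prints logits of shape (2, 7), matching the 7-class setting mentioned in the abstract; the 6-class setting would simply use `num_classes=6`.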