LAENet for micro-expression recognition
Published in: The Visual Computer, 2024-02, Vol. 40(2), pp. 585-599
Main authors: , , ,
Format: Article
Language: English
Online access: Full text
Abstract: Micro-expressions reveal one's true feelings and can potentially be applied in various domains such as healthcare, safety interrogation, and business negotiation. Micro-expression recognition has thus far been performed manually by psychologists and trained experts, which consumes considerable human effort and time. Recently, deep learning networks have demonstrated promising performance in many computer vision tasks, and micro-expression recognition has adopted deep learning methodologies to improve feature learning capability and model generalization. This paper introduces a lightweight apex-based enhanced network that extends a state-of-the-art shallow triple-stream three-dimensional CNN. Concretely, the network is first pre-trained on a macro-expression dataset to counter the small-data problem. The features extracted from the CASME II, SMIC, and SAMM datasets for a thorough comparison of recognition results are optical flow-guided features. In addition, an eye masking technique is introduced to reduce noise interference such as eye blinking and glasses reflection. The results obtained reach an accuracy of 79.19% and an F1-score of 75.9%. Comprehensive experiments were conducted on the composite dataset consisting of these three datasets, and a comparison with recent methods is provided. Detailed qualitative and quantitative results are reported and discussed.
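As a rough illustration of the preprocessing the abstract describes, the sketch below computes optical flow-guided features between an onset and an apex frame and suppresses eye regions with a binary mask. This is a minimal sketch, not the paper's implementation: the function names, the 28x28 output size, and the use of Farneback flow (TV-L1 is a common alternative in micro-expression pipelines) are assumptions for illustration only.

```python
# Hypothetical sketch of apex-based, optical flow-guided feature extraction
# with eye masking; names and parameters are illustrative, not the paper's.
import cv2
import numpy as np

def eye_mask(gray_frame):
    """Binary mask that zeroes out detected eye regions (blink/glasses noise)."""
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_eye.xml")
    mask = np.ones_like(gray_frame, dtype=np.float32)
    for (x, y, w, h) in cascade.detectMultiScale(gray_frame):
        mask[y:y + h, x:x + w] = 0.0  # suppress eye region
    return mask

def flow_features(onset_path, apex_path, size=(28, 28)):
    """Horizontal flow, vertical flow, and flow magnitude, eye-masked."""
    onset = cv2.imread(onset_path, cv2.IMREAD_GRAYSCALE)
    apex = cv2.imread(apex_path, cv2.IMREAD_GRAYSCALE)
    # Farneback flow keeps the example dependency-free; micro-expression
    # pipelines often use TV-L1 instead.
    flow = cv2.calcOpticalFlowFarneback(
        onset, apex, None, pyr_scale=0.5, levels=3, winsize=15,
        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
    u, v = flow[..., 0], flow[..., 1]
    mag = np.sqrt(u ** 2 + v ** 2)
    mask = eye_mask(onset)
    feat = np.stack([u * mask, v * mask, mag * mask], axis=-1)
    # Small three-channel input, one channel per stream of a shallow CNN.
    return cv2.resize(feat, size)
```

A triple-stream network then consumes the three channels (horizontal flow, vertical flow, magnitude) as its separate input streams; the eye mask removes motion that originates from blinking rather than facial muscle movement.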
ISSN: 0178-2789 (print), 1432-2315 (electronic)
DOI: 10.1007/s00371-023-02803-3