Learning Cognitive Features as Complementary for Facial Expression Recognition

Bibliographic Details
Published in: International Journal of Intelligent Systems, 2024-01, Vol. 2024 (1)
Main authors: Li, Huihui; Xiao, Xiangling; Liu, Xiaoyong; Wen, Guihua; Liu, Lianqi
Format: Article
Language: English
Description
Abstract: Facial expression recognition (FER) has a wide range of applications, including interactive gaming, healthcare, security, and human‐computer interaction systems. Despite the impressive performance of FER based on deep learning methods, it remains challenging in real‐world scenarios due to uncontrolled factors such as varying lighting conditions, face occlusion, and pose variations. In contrast, humans are able to categorize objects based on both their inherent characteristics and the surrounding environment from a cognitive standpoint, utilizing concepts such as cognitive relativity. Modeling the laws of cognitive relativity to learn cognitive features as feature augmentation may improve the performance of deep learning models for FER. Therefore, we propose a cognitive feature learning framework that learns cognitive features as complementary information for FER, consisting of a Relative Transformation module (AFRT) and a Graph Convolutional Network module (AFGCN). AFRT explicitly creates cognitive relative features that reflect the position relationships between samples based on human cognitive relativity, and AFGCN implicitly learns the interaction features between expressions as feature augmentation to improve the classification performance of FER. Extensive experimental results on three public datasets show the universality and effectiveness of the proposed method.
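The abstract does not specify how AFRT computes its relative features, but a common reading of "relative transformation" is to re-express each sample by its distances to a set of anchor samples, so that every feature encodes a position relationship rather than an absolute coordinate. A minimal sketch under that assumption (the function name and the use of Euclidean distance are illustrative, not taken from the paper):

```python
import numpy as np

def relative_transform(X, anchors):
    """Map each sample to its Euclidean distances from a set of anchor
    samples, so each output feature reflects a position relationship
    between samples (a hypothetical reading of the AFRT idea)."""
    # diffs[i, j, :] = X[i] - anchors[j]; result[i, j] = ||X[i] - anchors[j]||
    diffs = X[:, None, :] - anchors[None, :, :]
    return np.sqrt((diffs ** 2).sum(axis=-1))

# Toy usage: 4 samples with 3-dimensional features, using the samples
# themselves as anchors, so each row becomes distances to all samples.
X = np.array([[0.0, 0.0, 0.0],
              [1.0, 0.0, 0.0],
              [0.0, 2.0, 0.0],
              [1.0, 2.0, 0.0]])
R = relative_transform(X, X)
print(R.shape)  # (4, 4); the diagonal is 0 (each sample's distance to itself)
```

Such relative features could then be concatenated with the original deep features as the augmentation the abstract describes; the actual module in the paper may differ.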
ISSN: 0884-8173, 1098-111X
DOI: 10.1155/2024/7321175