Learning Dynamic Relationships for Facial Expression Recognition Based on Graph Convolutional Network

Bibliographic Details
Published in: IEEE Transactions on Image Processing, 2021-01, Vol. 30, p. 1-1
Authors: Jin, Xing; Lai, Zhihui; Jin, Zhong
Format: Article
Language: English
Abstract: Facial action unit (AU) analysis plays an important role in facial expression recognition (FER). Existing deep spectral convolutional networks (DSCNs) have achieved encouraging performance on FER based on a set of facial local regions and a predefined graph structure. However, those regions do not have close relationships to AUs, and DSCNs cannot model the dynamic spatial dependencies of these regions for estimating different facial expressions. To tackle these issues, we propose a novel Double Dynamic Relationships Graph Convolutional Network (DDRGCN) that learns the strength of the edges in the facial graph through a trainable weighted adjacency matrix. We construct facial graph data from 20 regions of interest (ROIs) guided by different facial AUs. Furthermore, we devise an efficient graph convolutional network in which the inherent dependencies of vertices in the facial graph are learned automatically during network training. Notably, the proposed model has only 110K parameters and a 0.48 MB model size, which is significantly less than most existing methods. Experiments on four widely used FER datasets demonstrate that the proposed dynamic relationships graph network achieves superior results compared to existing lightweight networks, not only in accuracy but also in model size and speed. (A hedged code sketch of the trainable-adjacency idea follows the record below.)
ISSN: 1057-7149, 1941-0042
DOI: 10.1109/TIP.2021.3101820
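The abstract's central idea is a graph convolution over AU-guided facial ROIs whose adjacency matrix is a trainable parameter rather than a predefined graph. The PyTorch sketch below illustrates that idea only in outline under stated assumptions: the class names (LearnableAdjGCNLayer, TinyFERGraphNet), the 32-dimensional ROI features, and the layer sizes are illustrative choices, not the authors' DDRGCN implementation; only the 20 nodes and the learned edge weights come from the abstract.

# Minimal, hypothetical sketch of a GCN layer with a trainable weighted
# adjacency matrix over 20 AU-guided facial ROI nodes. Names and sizes are
# illustrative assumptions, not the DDRGCN architecture from the paper.

import torch
import torch.nn as nn
import torch.nn.functional as F


class LearnableAdjGCNLayer(nn.Module):
    """Graph convolution whose adjacency weights are learned end to end."""

    def __init__(self, num_nodes: int, in_dim: int, out_dim: int):
        super().__init__()
        # Trainable weighted adjacency: edge strengths between ROIs are free
        # parameters updated by backpropagation, not a fixed graph.
        self.adj = nn.Parameter(
            torch.eye(num_nodes) + 0.01 * torch.randn(num_nodes, num_nodes)
        )
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_nodes, in_dim) node features, one row per facial ROI.
        # Normalize the learned adjacency so each node's aggregation weights sum to 1.
        a = F.softmax(self.adj, dim=-1)
        # Aggregate neighbor features, then apply the shared linear transform.
        return F.relu(self.linear(torch.einsum('ij,bjf->bif', a, x)))


class TinyFERGraphNet(nn.Module):
    """Two stacked layers, vertex pooling, and an expression classifier."""

    def __init__(self, num_nodes: int = 20, feat_dim: int = 32, num_classes: int = 7):
        super().__init__()
        self.gc1 = LearnableAdjGCNLayer(num_nodes, feat_dim, 64)
        self.gc2 = LearnableAdjGCNLayer(num_nodes, 64, 64)
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.gc2(self.gc1(x))
        # Average over the 20 ROI vertices before classifying the expression.
        return self.classifier(h.mean(dim=1))


if __name__ == "__main__":
    # Dummy batch: 4 faces, 20 ROI nodes, 32-dim appearance features per node.
    model = TinyFERGraphNet()
    logits = model(torch.randn(4, 20, 32))
    print(logits.shape)  # torch.Size([4, 7])

Learning the adjacency as a free parameter lets the edge strengths between ROIs adapt to the training data, which is the behavior the abstract attributes to its trainable weighted adjacency matrix; the paper's "double dynamic" design and its 110K-parameter, 0.48 MB model are not reproduced here.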