Capturing Feature and Label Relations Simultaneously for Multiple Facial Action Unit Recognition


Bibliographic Details
Published in: IEEE Transactions on Affective Computing, 2019-07, Vol. 10 (3), p. 348-359
Authors: Wang, Shangfei; Wu, Shan; Peng, Guozhu; Ji, Qiang
Format: Article
Language: English
Online access: Order full text
Description
Abstract: Although both feature dependencies and label dependencies are crucial for facial action unit (AU) recognition, little work has addressed them simultaneously to date. In this paper, we propose a 4-layer Restricted Boltzmann Machine (RBM) that simultaneously captures feature-level and label-level dependencies to recognize multiple AUs. The middle hidden layer of the 4-layer RBM captures dependencies among image features for multiple AUs, while the top latent units capture high-order semantic dependencies among AU labels. Furthermore, since AU relations are influenced by facial expressions, we extend the proposed 4-layer RBM for facial expression-augmented AU recognition. By introducing facial expression nodes in the middle visible layer, facial expressions, which are required only during training, facilitate the estimation of both feature dependencies and label dependencies among AUs. Efficient learning and inference algorithms for the extended model are also developed. Experimental results on three benchmark databases, i.e., the CK+, DISFA, and SEMAINE databases, demonstrate that the proposed approaches successfully capture complex AU relationships from features and labels jointly, and that expression labels available only during training benefit AU recognition during testing for both posed and spontaneous facial expressions.
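The abstract describes a layered model in which a middle hidden layer pools image features while top latent units encode dependencies among AU labels. The following is only a minimal illustrative sketch of that idea, not the paper's actual model or learning algorithm: all dimensions and weight matrices are hypothetical and randomly initialized, and inference is reduced to a simple mean-field-style refinement loop.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical dimensions (not taken from the paper)
n_feat, n_hid, n_au, n_top = 20, 16, 8, 4

# Random weights stand in for parameters the paper learns from data
W_fh = rng.normal(scale=0.1, size=(n_feat, n_hid))  # features -> middle hidden layer
W_hy = rng.normal(scale=0.1, size=(n_hid, n_au))    # middle hidden layer -> AU labels
W_yt = rng.normal(scale=0.1, size=(n_au, n_top))    # AU labels -> top latent units
b_h, b_y, b_t = np.zeros(n_hid), np.zeros(n_au), np.zeros(n_top)

def infer_aus(x, n_iters=20):
    """Toy mean-field-style inference of AU probabilities from features x."""
    h = sigmoid(x @ W_fh + b_h)   # middle layer: feature-level dependencies
    y = sigmoid(h @ W_hy + b_y)   # initial per-AU estimates
    for _ in range(n_iters):
        t = sigmoid(y @ W_yt + b_t)                # top units: label dependencies
        y = sigmoid(h @ W_hy + t @ W_yt.T + b_y)   # refine AUs with both signals
    return y

x = rng.normal(size=n_feat)        # a stand-in feature vector
probs = infer_aus(x)
print(probs.shape)                 # one probability per AU
```

The alternating updates between the AU layer and the top latent units mimic, very loosely, how coupling labels to shared latent units lets an estimate for one AU influence the others.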
ISSN: 1949-3045
DOI: 10.1109/TAFFC.2017.2737540