Facial Action Recognition Combining Heterogeneous Features via Multikernel Learning

Bibliographic Details
Published in: IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), 2012-08, Vol. 42 (4), p. 993-1005
Authors: Senechal, T., Rapp, V., Salam, H., Seguier, R., Bailly, K., Prevost, L.
Format: Article
Language: English
Description
Abstract: This paper presents our response to the first international challenge on facial emotion recognition and analysis. We propose to combine different types of features to automatically detect action units (AUs) in facial images. We use one multikernel support vector machine (SVM) for each AU we want to detect. The first kernel matrix is computed using local Gabor binary pattern histograms and a histogram intersection kernel. The second kernel matrix is computed from active appearance model coefficients and a radial basis function kernel. During the training step, we combine these two types of features using the recently proposed SimpleMKL algorithm. SVM outputs are then averaged to exploit temporal information in the sequence. To evaluate our system, we perform extensive experimentation on several key issues: the influence of features and kernel functions in histogram-based SVM approaches, the influence of spatially independent information versus geometric local appearance information and the benefits of combining both, sensitivity to training data, and the value of temporal context adaptation. We also compare our results with those of the other participants and try to explain why our method achieved the best performance in the facial expression recognition and analysis challenge.
ISSN: 1083-4419, 1941-0492
DOI: 10.1109/TSMCB.2012.2193567
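The core idea in the abstract, combining a histogram intersection kernel over appearance histograms with an RBF kernel over geometric coefficients in a single SVM, can be sketched in a few lines. The snippet below is a minimal illustration, not the authors' implementation: SimpleMKL learns the kernel weights by optimizing the SVM objective, whereas here the weights `d1` and `d2` are fixed by hand, and the feature matrices are synthetic stand-ins for LGBP histograms and AAM coefficients.

```python
import numpy as np
from sklearn.svm import SVC

def histogram_intersection_kernel(A, B):
    # K[i, j] = sum_k min(A[i, k], B[j, k]); suited to histogram features.
    return np.minimum(A[:, None, :], B[None, :, :]).sum(axis=2)

def rbf_kernel(A, B, gamma=0.1):
    # K[i, j] = exp(-gamma * ||a_i - b_j||^2)
    sq_dists = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-gamma * sq_dists)

# Synthetic data: hist_X stands in for LGBP histograms,
# geom_X for active appearance model coefficients.
rng = np.random.default_rng(0)
n = 40
hist_X = rng.random((n, 16))
geom_X = rng.random((n, 8))
y = np.where(rng.random(n) > 0.5, 1, -1)  # AU present / absent

# Fixed convex combination of the two kernels.
# SimpleMKL would instead learn d1, d2 jointly with the SVM.
d1, d2 = 0.6, 0.4
K_train = d1 * histogram_intersection_kernel(hist_X, hist_X) \
        + d2 * rbf_kernel(geom_X, geom_X)

# One such classifier would be trained per action unit.
clf = SVC(kernel="precomputed")
clf.fit(K_train, y)
pred = clf.predict(K_train)
```

At test time, the same weighted combination is computed between test and training samples before calling `clf.predict`; per-frame SVM outputs can then be averaged over a temporal window, as the abstract describes, to exploit sequence context.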