Partial Matching of Facial Expression Sequence Using Over-Complete Transition Dictionary for Emotion Recognition

Bibliographic Details
Published in: IEEE Transactions on Affective Computing, 2016-10, Vol. 7 (4), pp. 389-408
Main Authors: Lee, Seung Ho; Ro, Yong Man
Format: Article
Language: English
Description
Abstract: Facial dynamics contain useful information for facial expression recognition (FER). However, exploiting dynamics in FER is challenging, mainly because of the wide variety of expression transitions. For example, video sequences belonging to the same emotion class may differ in transition duration and/or transition type (e.g., onset versus offset). Such temporal mismatches between query and training video sequences can degrade FER performance. This paper proposes a new partial matching framework that aims to overcome temporal mismatches between expression transitions. During the training stage, we construct an over-complete transition dictionary that contains many possible partial expression transitions. During the test stage, we extract a number of partial expression transitions from a query video sequence and analyze each of them individually. This increases the likelihood that a partial expression transition in the query video sequence matches one of the partial expression transitions in the over-complete transition dictionary. To make the partial matching subject-independent and robust to temporal mismatch, each partial expression transition is defined as the facial shape displacement between a pair of face clusters. Experimental results show that the proposed method is robust to variations in transition duration and transition type in subject-independent recognition.
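
The pipeline the abstract outlines (cluster-pair displacement features, an over-complete dictionary of labeled partial transitions, and independent matching of each query transition) can be sketched as below. This is a minimal illustrative sketch under stated assumptions, not the authors' implementation: the use of k-means for face clustering, Euclidean nearest-neighbor matching, majority voting, and all function names here are assumptions made for illustration only.

    # Illustrative sketch of a partial-matching pipeline inspired by the
    # abstract. Descriptors, clustering, and the voting rule are assumptions,
    # not the method from the paper.
    import numpy as np
    from sklearn.cluster import KMeans

    def partial_transitions(sequence, n_clusters=4):
        """Represent a sequence (frames x shape features) as displacement
        vectors between pairs of face-cluster centers, loosely mirroring
        'facial shape displacement between a pair of face clusters'."""
        k = min(n_clusters, len(sequence))
        centers = KMeans(n_clusters=k, n_init=10).fit(sequence).cluster_centers_
        # Every ordered pair of cluster centers yields one partial transition.
        return [centers[j] - centers[i]
                for i in range(k) for j in range(k) if i != j]

    def build_dictionary(train_sequences, labels):
        """Over-complete dictionary: pool all partial transitions from all
        training sequences, each tagged with its emotion label."""
        entries, entry_labels = [], []
        for seq, lab in zip(train_sequences, labels):
            for t in partial_transitions(seq):
                entries.append(t)
                entry_labels.append(lab)
        return np.array(entries), np.array(entry_labels)

    def classify(query_sequence, dictionary, dict_labels):
        """Match each partial transition of the query individually against
        the dictionary (nearest neighbor), then vote. Matching transitions
        one by one is what tolerates duration/type mismatch."""
        votes = []
        for t in partial_transitions(query_sequence):
            nearest = np.argmin(np.linalg.norm(dictionary - t, axis=1))
            votes.append(dict_labels[nearest])
        values, counts = np.unique(votes, return_counts=True)
        return values[np.argmax(counts)]

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        # Toy synthetic data: two classes that differ in motion magnitude;
        # 10 training sequences of 20 frames x 16 shape features.
        train = [rng.normal(size=(20, 16)) * (1.0 + lab)
                 for lab in range(2) for _ in range(5)]
        labels = [lab for lab in range(2) for _ in range(5)]
        D, y = build_dictionary(train, labels)
        # Shorter query sequence, i.e., a transition-duration mismatch.
        query = rng.normal(size=(8, 16)) * 2.0
        print("predicted emotion class:", classify(query, D, y))

The sketch only shows the wiring: because each query transition is matched and voted on independently, a query that is shorter than, or shifted relative to, the training sequences can still hit compatible dictionary entries.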
ISSN: 1949-3045
DOI: 10.1109/TAFFC.2015.2496320