Attend to Where and When: Cascaded Attention Network for Facial Expression Recognition
Published in: IEEE Transactions on Emerging Topics in Computational Intelligence, 2022-06, Vol. 6 (3), pp. 580-592
Main authors: , , , , , ,
Format: Article
Language: English
Abstract: Recognizing human expressions in videos is a challenging task due to dynamic changes in facial actions and diverse visual appearances. The key to designing a reliable video-based expression recognition system is to extract robust spatial features and make full use of temporal modality characteristics. In this paper, we present a novel network architecture called Cascaded Attention Network (CAN), a cascaded spatiotemporal model incorporating both spatial and temporal attention, tailored to video-level facial expression recognition. The fundamental cascaded model consists of a transfer convolutional network and a Bidirectional Long Short-Term Memory (BiLSTM) network. Spatial attention is derived from facial landmarks, since facial expressions depend on the actions of key regions of the face (eyebrows, eyes, nose, and mouth); focusing on these key regions helps reduce the effect of person-specific attributes. Meanwhile, temporal attention is applied to automatically select expression peaks and aggregate a video-level representation. Our proposed CAN achieves state-of-the-art performance on the three most widely used facial expression datasets: CK+ (99.03%), Oulu-CASIA (88.33%), and MMI (83.55%). Moreover, we conduct an extended experiment on the much more complex in-the-wild dataset AFEW, and the results further verify the generality of our attention mechanisms.
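The abstract only outlines the pipeline, so the sketch below is one plausible reading of it in PyTorch: per-frame features from a pretrained CNN, a spatial attention map that can be guided by a landmark-derived mask over the key facial regions, a BiLSTM over frames, and a temporal attention pool that produces the video-level representation. The ResNet-18 backbone, the 7x7 attention map, the way the landmark mask is blended with the learned attention, and all layer sizes are assumptions for illustration; this is not the authors' released implementation.

```python
# Minimal sketch of a cascaded spatial/temporal attention model for
# video-level facial expression recognition (illustrative, not the paper's code).
import torch
import torch.nn as nn
from torchvision import models


class CascadedAttentionSketch(nn.Module):
    def __init__(self, num_classes=7, hidden=256):
        super().__init__()
        # Transfer CNN backbone: ImageNet-pretrained ResNet-18 with avgpool/fc removed,
        # giving a (B, 512, 7, 7) feature map for 224x224 inputs.
        backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
        self.cnn = nn.Sequential(*list(backbone.children())[:-2])
        self.feat_dim = 512

        # Spatial attention: a 1x1 conv predicts a saliency map over feature locations;
        # a landmark-derived mask over key regions (brows, eyes, nose, mouth) can guide it.
        self.spatial_att = nn.Conv2d(self.feat_dim, 1, kernel_size=1)

        # Cascaded temporal model: BiLSTM over per-frame descriptors.
        self.bilstm = nn.LSTM(self.feat_dim, hidden, batch_first=True, bidirectional=True)

        # Temporal attention: scores each frame so peak-expression frames dominate the pool.
        self.temporal_att = nn.Linear(2 * hidden, 1)
        self.classifier = nn.Linear(2 * hidden, num_classes)

    def forward(self, clips, landmark_masks=None):
        # clips: (B, T, 3, 224, 224); landmark_masks (optional): (B, T, 7, 7) in [0, 1]
        B, T = clips.shape[:2]
        feats = self.cnn(clips.flatten(0, 1))                 # (B*T, 512, 7, 7)

        att = torch.sigmoid(self.spatial_att(feats))          # (B*T, 1, 7, 7)
        if landmark_masks is not None:
            # One simple choice: blend the learned map with the landmark prior.
            att = att * landmark_masks.flatten(0, 1).unsqueeze(1)
        frame_desc = (feats * att).flatten(2).mean(-1)        # (B*T, 512)

        seq, _ = self.bilstm(frame_desc.view(B, T, -1))       # (B, T, 2*hidden)
        w = torch.softmax(self.temporal_att(seq), dim=1)      # (B, T, 1) frame weights
        video = (w * seq).sum(dim=1)                          # (B, 2*hidden) video descriptor
        return self.classifier(video)


if __name__ == "__main__":
    model = CascadedAttentionSketch()
    logits = model(torch.randn(2, 16, 3, 224, 224))
    print(logits.shape)  # torch.Size([2, 7])
```

In this reading, the spatial attention suppresses person-specific appearance by down-weighting locations outside the landmark-defined key regions, while the softmax over time lets the model emphasize peak-expression frames when aggregating the clip.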
ISSN: 2471-285X
DOI: 10.1109/TETCI.2021.3070713