Facial Expression Recognition by Regional Weighting with Approximated Q-Learning

Bibliographic Details
Published in: Symmetry (Basel), 2020-02, Vol. 12 (2), p. 319
Main authors: Oh, Seong-Gi; Kim, TaeYong
Format: Article
Language: English
Subjects:
Online access: Full text
Description
Abstract: Several facial expression recognition methods cluster facial elements according to similarity and weight them according to the importance of each element in classification. However, these methods are limited by pre-defined units that restrict modification of the structure during optimization. This study proposes a modified support vector machine classifier called Grid Map, which is combined with reinforcement learning to improve classification accuracy. To optimize training, the input image size is normalized according to the cascade rules of a pre-processing detector, and regional weights are assigned by an adaptive cell size that divides each region of the image using bounding grids. Reducing the size of a bounding grid reduces the area used for feature extraction, allowing more detailed weighted features to be extracted. Error-correcting output codes (ECOC) with histogram of oriented gradients (HOG) features are selected as the classification method via an experiment that determines the optimal feature and classifier combination. The proposed method is formulated as a decision process and solved via Q-learning. In classifying seven emotions, the proposed method achieves accuracies of 96.36% on four databases and 98.47% on the Extended Cohn–Kanade dataset (CK+). Compared to the baseline method with similar accuracy, the proposed method requires 68.81% fewer features and only 66.33% of the processing time.
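As a rough illustration of the pipeline summarized above, the sketch below extracts HOG features per grid cell, scales each cell by a regional weight, and classifies with an ECOC-wrapped linear SVM. The grid size, weights, and HOG parameters are illustrative assumptions, not the authors' Grid Map implementation.

import numpy as np
from skimage.feature import hog
from sklearn.multiclass import OutputCodeClassifier
from sklearn.svm import LinearSVC

GRID = 4                                 # hypothetical fixed 4x4 grid
CELL_WEIGHTS = np.ones((GRID, GRID))     # regional weights (the paper tunes these via Q-learning)

def grid_hog(image, grid=GRID, weights=CELL_WEIGHTS):
    # Extract HOG features from each grid cell and scale them by the cell's weight.
    h, w = image.shape
    ch, cw = h // grid, w // grid
    feats = []
    for i in range(grid):
        for j in range(grid):
            cell = image[i * ch:(i + 1) * ch, j * cw:(j + 1) * cw]
            f = hog(cell, orientations=9, pixels_per_cell=(8, 8),
                    cells_per_block=(2, 2), feature_vector=True)
            feats.append(weights[i, j] * f)
    return np.concatenate(feats)

# Usage (X_train: normalized grayscale face crops, y_train: seven emotion labels):
# clf = OutputCodeClassifier(LinearSVC(), code_size=2, random_state=0)
# clf.fit(np.stack([grid_hog(img) for img in X_train]), y_train)

The abstract also states that the weighting structure is optimized as a decision process solved via Q-learning. A generic tabular Q-learning update of the kind that could drive such a search is sketched below; the state, action, and reward definitions are assumptions, not the paper's formulation.

import collections

ALPHA, GAMMA = 0.1, 0.9                  # assumed learning rate and discount factor
Q = collections.defaultdict(float)       # Q[(state, action)] table

def q_update(state, action, reward, next_state, actions):
    # One tabular Q-learning step; "reward" might be validation accuracy minus a
    # penalty on feature count, and "actions" might refine or re-weight grid cells.
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])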
ISSN: 2073-8994
DOI: 10.3390/sym12020319