Emotion recognition based on human gesture and speech information using RT middleware


Full description

Saved in:
Bibliographic details
Main authors: Vu, H. A., Yamazaki, Y., Dong, F., Hirota, K.
Format: Conference paper
Language: English
Subjects:
Online access: Order full text
Description
Summary: A bi-modal emotion recognition approach is proposed for recognizing four emotions by integrating information from gestures and speech. The outputs of two unimodal emotion recognition systems, one based on affective speech and one on expressive gestures, are fused at the decision level using a weight criterion fusion method and a best-probability-plus-majority-vote fusion method. The resulting classifier performs better than either unimodal system and helps recognize emotions suited to communication situations. To validate the proposal, fifty Japanese words (or phrases) and 8 types of gestures recorded from five participants are used, and the emotion recognition rate reaches 85.39%. The approach can be extended to additional modalities and is useful for automatic emotion recognition in human-robot communication.
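The decision-level weight criterion fusion described above can be illustrated with a minimal sketch: each unimodal classifier outputs a posterior probability over the four emotions, and the fused decision is the argmax of a weighted sum of the two vectors. The emotion labels, weights, and probability values below are illustrative assumptions, not the authors' exact configuration.

```python
import numpy as np

# Hypothetical set of four target emotions (names are illustrative).
EMOTIONS = ["joy", "sadness", "anger", "neutral"]

def weight_criterion_fusion(p_speech, p_gesture, w_speech=0.5):
    """Decision-level fusion: weighted sum of the two unimodal
    posterior vectors, then argmax over the fused scores."""
    p_speech = np.asarray(p_speech, dtype=float)
    p_gesture = np.asarray(p_gesture, dtype=float)
    fused = w_speech * p_speech + (1.0 - w_speech) * p_gesture
    return EMOTIONS[int(np.argmax(fused))], fused

# Example: speech leans toward "anger", gesture toward "joy";
# with equal weights the fused decision follows the stronger evidence.
label, fused = weight_criterion_fusion(
    [0.2, 0.1, 0.4, 0.3],   # speech-classifier posteriors
    [0.6, 0.1, 0.2, 0.1],   # gesture-classifier posteriors
)
```

In a best-probability-plus-majority-vote variant, the fused label would instead be chosen by voting among the unimodal decisions, falling back to the single highest posterior when the vote is tied.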
ISSN:1098-7584
DOI:10.1109/FUZZY.2011.6007557