Speech-Based Emotion Characterization Using Postures and Gestures in CVEs
Saved in:
Main Authors: , ,
Format: Conference Proceeding
Language: English
Subjects:
Online Access: Order full text
Summary: Collaborative Virtual Environments (CVEs) have become increasingly popular in the past two decades. Most CVEs use avatar systems to represent each user logged into a CVE session. Some avatar systems are capable of expressing emotions through postures, gestures, and facial expressions. In previous studies, various approaches have been explored to convey emotional states to the computer, including voice and facial movements. We propose a technique to detect emotions in the voice of a speaker and animate avatars to reflect the extracted emotions in real time. The system has been developed in "Project Wonderland," a Java-based open-source framework for creating collaborative 3D virtual worlds. In our prototype, six primitive emotional states (anger, dislike, fear, happiness, sadness, and surprise) were considered. An emotion classification system that uses short-time log frequency power coefficients (LFPC) as features and hidden Markov models (HMMs) as the classifier was modified to build an emotion classification unit. The extracted emotions were used to activate existing avatar postures and gestures in Wonderland.
DOI: 10.1109/CW.2010.75
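The pipeline the abstract describes (short-time LFPC features scored against one HMM per emotion) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the band count, frequency limits, frame length, and the example log-likelihood values are all assumptions chosen for the sketch.

```python
import numpy as np

def lfpc_features(frame, sample_rate=16000, n_bands=12, f_min=100.0):
    """Short-time Log Frequency Power Coefficients for one audio frame.

    Band edges are spaced logarithmically between f_min and the Nyquist
    frequency; the band count and limits here are illustrative, not the
    values used in the paper."""
    windowed = frame * np.hamming(len(frame))
    power = np.abs(np.fft.rfft(windowed)) ** 2          # power spectrum
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sample_rate)
    edges = np.logspace(np.log10(f_min), np.log10(sample_rate / 2.0),
                        n_bands + 1)                    # log-spaced band edges
    coeffs = np.empty(n_bands)
    for b in range(n_bands):
        in_band = (freqs >= edges[b]) & (freqs < edges[b + 1])
        coeffs[b] = np.log(power[in_band].sum() + 1e-10)  # guard log(0)
    return coeffs

# Classification step: a sequence of LFPC frames would be scored against one
# trained HMM per emotion; the label with the highest log-likelihood wins.
# The numbers below are made-up placeholders for illustration only.
log_likelihoods = {"anger": -812.4, "happiness": -790.1, "sadness": -835.7}
predicted = max(log_likelihoods, key=log_likelihoods.get)

frame = np.sin(2 * np.pi * 440 * np.arange(400) / 16000)  # 25 ms test tone
print(lfpc_features(frame).shape, predicted)  # (12,) happiness
```

In a full system, each of the six emotion HMMs would be trained on LFPC frame sequences from labeled speech, and the argmax over their sequence log-likelihoods would drive the avatar's posture and gesture selection.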