Articulatory-Acoustic Analyses of Mandarin Words in Emotional Context Speech for Smart Campus



Bibliographic details
Published in: IEEE Access, 2018-01, Vol. 6, p. 48418-48427
Main authors: Ren, Guofeng; Zhang, Xueying; Duan, Shufei
Format: Article
Language: English
Online access: Full text
Description
Abstract: In recent years, alongside the promotion of the smart campus, social networks have developed rapidly, which demands highly accurate human-human and human-machine interaction technologies. Physiological information in speech interaction processing has therefore become an important complement to, or even a replacement for, acoustic-based features. With the aim of assessing the influence of emotion on articulatory-acoustic features in speech production, the current study explored the articulatory mechanism underlying the emotional speech production of Mandarin words. We first used the AG501 EMA device to collect articulatory and acoustic data synchronously while subjects spoke specific Mandarin words with different emotions (anger, sadness, happiness, and neutral); articulatory and acoustic features were then extracted from the collected data and analyzed with a one-way ANOVA to determine the significance of emotion's effect on them. The results show that the motion of the articulators (tongue and lips) is significantly influenced by emotion: the motion range of the tongue and lips is larger for anger than for the other emotions, while tongue and lip speed are more sensitive to anger and happiness than to sadness and neutral in emotional words. The results are discussed to uncover the relationship between the acoustic and articulatory features of emotional speech, leading to the conclusion that articulatory motion features (tongue and lips) may be the major features for emotional speech recognition, and thus can be applied to human-machine interaction in future smart campus research.
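The one-way ANOVA described in the abstract can be illustrated with a short sketch. This is not the authors' code, and all data values below are synthetic placeholders; it only shows the form of the test: comparing a single articulatory feature (e.g., tongue motion range, in mm) across the four emotion conditions.

```python
# Illustrative sketch (not the paper's analysis): one-way ANOVA computed by
# hand for one articulatory feature across four emotion groups.

def one_way_anova(groups):
    """Return the F statistic for a one-way ANOVA over the given groups."""
    all_values = [x for g in groups for x in g]
    n_total = len(all_values)
    grand_mean = sum(all_values) / n_total

    # Between-group sum of squares: variation of group means around the grand mean.
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-group sum of squares: variation of samples around their group mean.
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)

    df_between = len(groups) - 1
    df_within = n_total - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)

# Hypothetical per-utterance tongue motion ranges (mm) per emotion.
anger, happiness = [12.1, 13.4, 11.8, 12.9], [10.9, 11.1, 10.5, 10.8]
sadness, neutral = [8.7, 9.1, 8.9, 9.4], [9.0, 9.3, 8.8, 9.2]

f_stat = one_way_anova([anger, happiness, sadness, neutral])
print(f"F(3, 12) = {f_stat:.2f}")  # compare against the F critical value at the chosen alpha
```

A large F statistic relative to the critical value of the F(3, 12) distribution would indicate, as in the paper's findings, that emotion has a significant effect on the feature.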
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2018.2865831