A new proposed statistical feature extraction method in speech emotion recognition


Bibliographic Details
Published in: Computers & Electrical Engineering, 2021-07, Vol. 93, p. 107172, Article 107172
Authors: Abdulmohsin, Husam Ali; Abdul Wahab, Hala Bahjat; Abdul Hossen, Abdul Mohssen Jaber
Format: Article
Language: English
Abstract:

Highlights:
• A new feature extraction method is proposed, using fourteen standard deviation degrees.
• The RAVDESS, SAVEE and Emo-DB datasets are used to evaluate the new feature extraction method.
• A speech emotion recognition system is designed to test the recognition power of the newly extracted features.
• High accuracy results are achieved compared to state-of-the-art research, using a neural network.

Feature extraction is the most important step in pattern recognition systems, and researchers have focused extensively on this field. This work aims to design and implement a novel feature extraction method that can extract features capable of distinguishing different emotions. A unimodal, real-time, gender- and speaker-independent speech emotion recognition (SER) framework has been designed and implemented using the newly proposed statistical features. The contribution to feature extraction lies in computing the statistical feature at many degrees of the standard deviation (SD) on either side of the mean, rather than at only 2 SDs on either side of the mean as is conventionally done. The degrees of deviation on either side of the mean used to study how each feature is distributed around its mean are 0.25, 0.5, 0.75, 1, 1.25, 1.5, 1.75, 2, 2.25, 2.5, 2.75, 3, 3.5 and 4. The datasets used in this work were the Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS) with eight emotions, the Berlin dataset (Emo-DB) with seven emotions, and the Surrey Audio-Visual Expressed Emotion dataset (SAVEE) with seven emotions. Compared to state-of-the-art unimodal SER approaches, the classification accuracies achieved in this work were high: 86.1%, 96.3% and 91.7% for the RAVDESS, Emo-DB and SAVEE datasets, respectively.
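The abstract does not spell out exactly what statistic is computed at each of the fourteen SD degrees, so the following Python sketch is only one plausible reading: it assumes the feature for a degree k is the fraction of frame-level values falling within mean ± k·SD. The names sd_degree_features and SD_DEGREES, and the synthetic contour, are illustrative and not taken from the paper.

import numpy as np

# The fourteen standard-deviation degrees listed in the abstract.
SD_DEGREES = (0.25, 0.5, 0.75, 1, 1.25, 1.5, 1.75, 2,
              2.25, 2.5, 2.75, 3, 3.5, 4)

def sd_degree_features(values, degrees=SD_DEGREES):
    """Return one feature per SD degree for a 1-D array of frame-level values.

    Hypothetical reading of the method: each feature is the fraction of
    values falling within mean +/- k*SD, describing how the values are
    distributed around the mean at degree k.
    """
    values = np.asarray(values, dtype=float)
    mu = values.mean()
    sigma = values.std()
    feats = []
    for k in degrees:
        lower, upper = mu - k * sigma, mu + k * sigma
        feats.append(np.mean((values >= lower) & (values <= upper)))
    return np.array(feats)

# Example: features from a synthetic frame-level contour (e.g. pitch or energy).
rng = np.random.default_rng(0)
contour = rng.normal(loc=200.0, scale=25.0, size=500)
print(sd_degree_features(contour))  # 14 values in [0, 1], one per degree

Applied to several frame-level descriptors per utterance (pitch, energy, spectral measures and the like), such a function would yield a fixed-length statistical feature vector of the kind that could feed the neural network classifier mentioned in the highlights.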
ISSN: 0045-7906, 1879-0755
DOI: 10.1016/j.compeleceng.2021.107172