Optimal trained ensemble of classification model for speech emotion recognition: Considering cross-lingual and multi-lingual scenarios
Published in: Multimedia Tools and Applications, 2024-05, Vol. 83 (18), pp. 54331–54365
Main authors: ,
Format: Article
Language: English
Keywords:
Online access: Full text
Abstract: Speech plays a significant role in conveying emotional information, and speech emotion recognition (SER) has emerged as a crucial component of human–computer interfaces with high real-time and accuracy demands. This paper proposes a novel Improved Coot Optimization-based Ensemble Classification (ICO-EC) for SER that follows three stages: preprocessing, feature extraction, and classification. The model starts with the preprocessing step, where the class imbalance problem is resolved using Improved SMOTE-ENC. Subsequently, in the feature extraction stage, IMFCC-based, Chroma-based, ZCR-based, and spectral roll-off-based features are extracted. The last stage is classification, in which an ensemble model combines three classifiers: Deep Maxout, LSTM, and ICNN. Here, the training process is made optimal via Improved Coot Optimization (ICO), which tunes the ensemble weights. Finally, the performance of the developed model is validated against conventional methods on four different databases. In the cross-lingual setting, the proposed model achieves accuracies of 92.76% for Hindi, 92.95% for Kannada, 93.85% for Telugu, and 95.97% for Urdu. The ICO-EC model achieved over 93% accuracy on the Hindi dataset, outperforming the other models.
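Two of the features named in the abstract, zero-crossing rate (ZCR) and spectral roll-off, are standard short-time speech descriptors and can be sketched in a few lines of NumPy. This is an illustrative sketch of the generic definitions only, not the paper's implementation; the frame length, sample rate, and 85% roll-off percentage are assumed values, and the test signal is a synthetic tone.

```python
import numpy as np

def zero_crossing_rate(frame):
    """Fraction of adjacent sample pairs whose signs differ."""
    signs = np.signbit(frame)
    return np.mean(signs[1:] != signs[:-1])

def spectral_rolloff(frame, sr, roll_percent=0.85):
    """Frequency below which roll_percent of the cumulative
    magnitude spectrum lies (a common roll-off definition)."""
    spectrum = np.abs(np.fft.rfft(frame))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sr)
    cumulative = np.cumsum(spectrum)
    idx = np.searchsorted(cumulative, roll_percent * cumulative[-1])
    return freqs[idx]

# Demo on one 25 ms frame of a synthetic 440 Hz tone (assumed parameters).
sr = 16000
t = np.arange(0, 0.025, 1.0 / sr)
frame = np.sin(2 * np.pi * 440 * t)

zcr = zero_crossing_rate(frame)        # ~2 * 440 / 16000 crossings/sample
rolloff = spectral_rolloff(frame, sr)  # near 440 Hz for a pure tone
```

For a pure tone, both features are predictable (the ZCR is roughly twice the tone frequency divided by the sample rate, and the roll-off sits at the tone frequency), which makes this a convenient sanity check before applying the same functions to framed speech.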
ISSN: 1380-7501, 1573-7721
DOI: 10.1007/s11042-023-17097-9