Evaluating the effectiveness of machine learning in identifying the optimal facial electromyography location for emotion detection

Bibliographic Details
Published in: Biomedical Signal Processing and Control, 2025-02, Vol. 100, p. 107012, Article 107012
Main Authors: Barigala, Vinay Kumar; P.J., Swarubini; P., Sriram Kumar; Ganapathy, Nagarajan; P.A., Karthik; Kumar, Deepesh; Agastinose Ronickom, Jac Fredo
Format: Article
Language: English
Online Access: Full text
Description
Abstract: Emotional state recognition is crucial for identifying emotions and providing valuable insights into detecting prolonged stress or negative emotions in individuals. In this study, we explore the feasibility of utilizing facial electromyography (fEMG) signals to accurately recognize emotions and to determine the optimal recording location. To investigate various emotions, we used the Continuously Annotated Signals of Emotion dataset, consisting of fEMG signals captured from three distinct muscle locations: zygomaticus major (zEMG), corrugator supercilii (cEMG), and trapezius (tEMG). These fEMG signals underwent analysis through feature extraction in the time, frequency, and time-frequency domains. We identified the optimal muscle location for recognizing emotions using different machine learning models, such as logistic regression (LR), support vector machine, and random forest (RF), and validated the results using a 10-fold cross-validation approach. Additionally, we identified the most influential features for distinguishing between the emotions using the RF feature ranking method. Our findings showed that we attained the highest average accuracy of 74.79% for emotion classification by utilizing the 31 top-ranked features from the time, frequency, and time-frequency domains of the three fEMG signals (zEMG, cEMG, and tEMG) with the RF classifier. Moreover, we achieved an average accuracy of 74.17% by utilizing the top 10 time-domain features extracted from the zEMG signals with the LR classifier. In summary, this study demonstrates promising outcomes in utilizing fEMG signals for efficient emotion recognition and presents an innovative approach to the field of affective computing.
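The evaluation pipeline the abstract describes — a random forest classifier validated with 10-fold cross-validation, followed by impurity-based feature ranking — can be sketched as below. This is a minimal illustration only: the feature matrix and emotion labels here are synthetic placeholders standing in for the extracted fEMG features, not the authors' dataset or exact configuration.

```python
# Sketch of the described pipeline: RF classification with 10-fold CV,
# then feature ranking via random-forest importances.
# Synthetic placeholder data; the 31-feature width mirrors the abstract.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_samples, n_features = 300, 31
X = rng.normal(size=(n_samples, n_features))   # placeholder fEMG features
y = rng.integers(0, 4, size=n_samples)         # placeholder emotion labels

clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, cv=10)     # 10-fold cross-validation
print(f"mean CV accuracy: {scores.mean():.3f}")

# Rank features by mean decrease in impurity (RF feature ranking).
clf.fit(X, y)
ranking = np.argsort(clf.feature_importances_)[::-1]
print("top 5 feature indices:", ranking[:5].tolist())
```

With real features, the top-ranked indices from `ranking` would be fed back as the reduced feature subset (e.g. the top 10 time-domain features) before re-training the classifier.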
ISSN: 1746-8094
DOI: 10.1016/j.bspc.2024.107012