Leveraging explainable artificial intelligence for emotional label prediction through health sensor monitoring
Published in: Cluster Computing, 2025-04, Vol. 28 (2), p. 86, Article 86
Main authors: , , , , ,
Format: Article
Language: English
Subjects:
Online access: Full text
Abstract: Emotion recognition, a burgeoning field with applications in healthcare, human-computer interaction, and affective computing, has seen significant advances by integrating physiological signals and environmental factors. With the increasing development of Artificial Intelligence (AI), the precision and efficiency of machine learning (ML) algorithms are becoming increasingly crucial to a growing number of businesses. However, the opacity and black-box nature of ML methods limit our ability to comprehend the underlying logic, merely allowing us to obtain results. Consequently, understanding the intricate models created for emotion recognition remains vital. ML techniques, such as Random Forest (RF) and Decision Tree (DT) classifiers, were used to predict emotional labels on a dataset collected from an actual study that includes environmental and physiological sensors. In this paper, four performance indicators were used to evaluate the results: accuracy, precision, recall, and F1 score. Based on the findings, the RF and DT algorithms demonstrated impressive performance with an average accuracy of 98%, precision of 97.8%, recall of 97.8%, and F-measure of 98.2%. Furthermore, this paper discusses the use of Explainable Artificial Intelligence (XAI) techniques, such as Shapley Additive Explanations (SHAP) and Local Interpretable Model-agnostic Explanations (LIME), which were implemented and applied to the results obtained from these ML methods to improve the interpretability and transparency of emotion recognition systems that integrate physiological signals and environmental factors. This article investigates the significance of these methods in providing insights into the relationships between human emotions and external stimuli and their potential to advance personalized and context-based applications in various domains.
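The pipeline described in the abstract, tree-based classifiers evaluated with four metrics and then explained with SHAP and LIME, can be outlined with standard Python tooling. The sketch below is not the authors' code or data: it assumes a hypothetical tabular sensor dataset (feature matrix X of physiological/environmental readings, emotion labels y) generated at random as a placeholder, and shows only the general shape of such a workflow.

```python
# Minimal sketch (not the paper's implementation): RF/DT emotion-label
# prediction with SHAP and LIME explanations on placeholder sensor data.
import numpy as np
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

# Hypothetical placeholders standing in for the study's sensor dataset.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))            # 8 physiological/environmental features
y = rng.integers(0, 3, size=500)         # 3 emotion classes
feature_names = [f"sensor_{i}" for i in range(X.shape[1])]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

models = {
    "RF": RandomForestClassifier(n_estimators=200, random_state=0),
    "DT": DecisionTreeClassifier(random_state=0),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    y_pred = model.predict(X_test)
    # The four indicators used in the paper: accuracy, precision, recall, F1.
    print(name,
          "acc=%.3f" % accuracy_score(y_test, y_pred),
          "prec=%.3f" % precision_score(y_test, y_pred, average="macro"),
          "rec=%.3f" % recall_score(y_test, y_pred, average="macro"),
          "f1=%.3f" % f1_score(y_test, y_pred, average="macro"))

# SHAP: global feature attributions for the tree-based Random Forest.
rf = models["RF"]
shap_values = shap.TreeExplainer(rf).shap_values(X_test)
shap.summary_plot(shap_values, X_test, feature_names=feature_names, show=False)

# LIME: local explanation for one test instance.
lime_explainer = LimeTabularExplainer(
    X_train, feature_names=feature_names, mode="classification")
lime_exp = lime_explainer.explain_instance(X_test[0], rf.predict_proba, num_features=5)
print(lime_exp.as_list())
```

In this sketch SHAP provides a global ranking of which sensor features drive the model's predictions, while LIME explains a single prediction locally; both operate on the trained classifier without changing it, which is the sense in which such post-hoc XAI methods add transparency to a black-box model.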
ISSN: 1386-7857, 1573-7543
DOI: 10.1007/s10586-024-04804-w