Enhancing Human-Computer Interaction through Emotion Recognition in Real-Life Speech
Saved in:
Published in: Proceedings of the XXth Conference of Open Innovations Association FRUCT, 2023-05, Vol. 33 (2), pp. 415-417
Main authors: (not listed in record)
Format: Article
Language: English
Online access: Full text
Abstract: Extracting data from real-life speech and recognizing emotions from it is a challenging task that has gained popularity over the past few years. The goal of this study is to enable direct human-computer interaction (HCI) that determines a person's condition or emotion through voice analysis. Specifically, emotion is extracted both from the recorded audio and from the text transcribed from that audio, and the two sources are integrated in a precise manner. This multimodal integration of speech and text yields good results in observing a person's emotional state. Machine learning and deep learning algorithms are used to determine the emotional state. The study achieves an accuracy of around 62%.
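The abstract describes combining predictions from an audio-based and a text-based emotion classifier. The paper's actual fusion method is not given in this record, so the following is only a minimal late-fusion sketch under assumed emotion labels, probabilities, and weights, all hypothetical:

```python
# Hypothetical late-fusion sketch: combine per-emotion probability
# distributions from an audio model and a text model by weighted
# averaging, then pick the highest-scoring emotion.
# The labels, weights, and example probabilities are illustrative only.

EMOTIONS = ["angry", "happy", "neutral", "sad"]

def fuse_predictions(audio_probs, text_probs, audio_weight=0.5):
    """Weighted average of per-emotion probabilities from two modalities."""
    fused = [audio_weight * a + (1.0 - audio_weight) * t
             for a, t in zip(audio_probs, text_probs)]
    total = sum(fused)
    return [p / total for p in fused]  # renormalize to sum to 1

def predict(audio_probs, text_probs):
    """Return the emotion label with the highest fused probability."""
    fused = fuse_predictions(audio_probs, text_probs)
    return EMOTIONS[max(range(len(fused)), key=fused.__getitem__)]

# Example: the audio model leans "angry", the text model leans "sad";
# fusion picks the emotion the two modalities jointly support most.
audio = [0.50, 0.10, 0.10, 0.30]
text = [0.30, 0.05, 0.10, 0.55]
print(predict(audio, text))  # -> sad
```

Late fusion like this keeps the two classifiers independent, which is one common way to integrate speech and text modalities; feature-level (early) fusion is the usual alternative.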
ISSN: 2305-7254; 2343-0737
DOI: 10.5281/zenodo.8005393