A Multimodal Driver Anger Recognition Method Based on Context-Awareness

Bibliographic Details
Published in: IEEE Access, 2024, Vol. 12, pp. 118533-118550
Main Authors: Ding, Tongqiang; Zhang, Kexin; Gao, Shuai; Miao, Xinning; Xi, Jianfeng
Format: Article
Language: English
Online Access: Full text
Description
Abstract: The harm that driving anger poses to traffic safety is increasingly prominent. With the development of human-computer interaction and intelligent transportation systems, the application of biometric technology to driver emotion recognition has attracted widespread attention. This study proposes a context-aware multi-modal driver anger emotion recognition method (CA-MDER) to address the main issues encountered in multi-modal emotion recognition tasks: individual differences among drivers, variability in emotional expression across driving scenarios, and the failure to capture driving behavior information that reflects vehicle-to-vehicle interaction. The method employs an Attention Mechanism-Depthwise Separable Convolutional Neural Network (AM-DSCNN), an improved Support Vector Machine (SVM), and a Random Forest (RF) model to recognize anger from facial, vocal, and driving-state information, and uses Context-Aware Reinforcement Learning (CA-RL) to assign adaptive weights for multi-modal decision-level fusion. The results show that the proposed method performs well on emotion classification metrics, with an accuracy of 91.68% and an F1 score of 90.37%, demonstrating robust multi-modal emotion recognition performance.
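The record gives no implementation details, but the decision-level fusion step described in the abstract can be illustrated with a minimal sketch: each modality (face, voice, driving state) produces class probabilities, and a set of adaptive weights (here fixed constants standing in for the context-aware policy) combines them. All function names, weights, and probability values below are hypothetical and are not taken from the paper.

```python
# Minimal sketch of adaptive-weight decision-level fusion (illustrative only;
# the weights here stand in for whatever a context-aware policy would output).
import numpy as np

def fuse_decisions(modality_probs, weights):
    """Combine per-modality class-probability vectors with adaptive weights.

    modality_probs: dict mapping modality name -> probability vector over
                    emotion classes (e.g. [neutral, anger]).
    weights:        dict mapping modality name -> scalar weight; renormalized
                    here so the weights sum to 1.
    """
    names = list(modality_probs)
    w = np.array([weights[n] for n in names], dtype=float)
    w /= w.sum()                                  # ensure weights sum to 1
    probs = np.stack([modality_probs[n] for n in names])
    fused = (w[:, None] * probs).sum(axis=0)      # weighted sum of probabilities
    return fused

# Example: face, voice, and driving-state classifiers each output
# P(neutral), P(anger); the fusion weights favor the facial channel.
probs = {
    "face":    np.array([0.20, 0.80]),
    "voice":   np.array([0.45, 0.55]),
    "driving": np.array([0.30, 0.70]),
}
weights = {"face": 0.5, "voice": 0.2, "driving": 0.3}

fused = fuse_decisions(probs, weights)
print("fused probabilities:", fused)             # [0.28, 0.72]
print("predicted class:", ["neutral", "anger"][int(fused.argmax())])
```

In this sketch the fusion is a simple convex combination of per-modality probabilities; the paper's contribution lies in how the weights are chosen adaptively per context, which is not reproduced here.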
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2024.3422383