A Multimodal Driver Anger Recognition Method Based on Context-Awareness


Full Description

Saved in:
Bibliographic Details
Published in: IEEE Access, 2024, Vol. 12, p. 118533-118550
Main Authors: Ding, Tongqiang; Zhang, Kexin; Gao, Shuai; Miao, Xinning; Xi, Jianfeng
Format: Article
Language: English
Subjects:
Online Access: Full text
Description: In today's society, driving anger poses an increasingly serious threat to traffic safety. With the development of human-computer interaction and intelligent transportation systems, the application of biometric technology to driver emotion recognition has attracted widespread attention. This study proposes a context-aware multimodal driver anger emotion recognition method (CA-MDER) to address the main issues encountered in multimodal emotion recognition tasks: individual differences among drivers, variability in emotional expression across driving scenarios, and the inability to capture driving-behavior information that reflects vehicle-to-vehicle interaction. The method employs Attention Mechanism-Depthwise Separable Convolutional Neural Networks (AM-DSCNN), an improved Support Vector Machine (SVM), and Random Forest (RF) models to recognize anger from facial, vocal, and driving-state information, and uses Context-Aware Reinforcement Learning (CA-RL) based adaptive weight distribution for multimodal decision-level fusion. The proposed method performs well on emotion classification metrics, achieving an accuracy of 91.68% and an F1 score of 90.37%, demonstrating robust multimodal emotion recognition performance.
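The adaptive-weight decision-level fusion described in the abstract can be illustrated with a minimal sketch. All names here are hypothetical, and the paper's CA-RL agent is replaced by fixed, context-dependent reliability scores for brevity; the sketch only shows how per-modality class probabilities could be combined under softmax-normalized weights.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    e = np.exp(x - np.max(x))
    return e / e.sum()

def fuse_decisions(probs_by_modality, context_scores):
    """Adaptive-weight decision-level fusion (illustrative only).

    probs_by_modality: dict mapping modality name -> class-probability vector,
        e.g. outputs of the facial, vocal, and driving-state classifiers.
    context_scores: dict mapping modality name -> scalar reliability score;
        in the paper these would come from the CA-RL agent, here they are given.
    Returns the fused class-probability vector.
    """
    modalities = sorted(probs_by_modality)
    weights = softmax(np.array([context_scores[m] for m in modalities]))
    fused = sum(w * np.asarray(probs_by_modality[m], dtype=float)
                for w, m in zip(weights, modalities))
    return fused / fused.sum()  # renormalize for numerical safety

# Example: three modalities voting on {neutral, angry}.
probs = {
    "face":  [0.30, 0.70],
    "voice": [0.60, 0.40],
    "drive": [0.20, 0.80],
}
scores = {"face": 2.0, "voice": 0.5, "drive": 1.0}  # face trusted most here
fused = fuse_decisions(probs, scores)
```

Because the facial and driving-state classifiers both favor "angry" and carry most of the weight, the fused distribution also favors "angry"; down-weighting an unreliable modality (here, voice) is the point of context-dependent weighting.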
DOI: 10.1109/ACCESS.2024.3422383
ISSN: 2169-3536
Source: IEEE Open Access Journals; DOAJ Directory of Open Access Journals; Elektronische Zeitschriftenbibliothek (freely accessible e-journals)
Subjects:
Accuracy
Context awareness
Convolutional neural networks
driving state emotion recognition
Emotion recognition
emotional expression heterogeneity
Face recognition
Feature extraction
Heterogeneous networks
Machine learning
multimodal emotion recognition
Speech recognition
Vehicles