Minimizing EEG Human Interference: A Study of an Adaptive EEG Spatial Feature Extraction With Deep Convolutional Neural Networks

Bibliographic Details

Published in: IEEE Transactions on Cognitive and Developmental Systems, 2024-12, Vol. 16 (6), pp. 1915-1928
Authors: Deng, Haojin; Wang, Shiqi; Yang, Yimin; Zhao, W. G. Will; Zhang, Hui; Wei, Ruizhong; Wu, Q. M. Jonathan; Lu, Bao-Liang
Format: Article
Language: English
Subjects: Brain modeling; Deep learning; Electrodes; Electroencephalography; electroencephalography (EEG); Emotion recognition; feature combination; Feature extraction; Sensor fusion; Task analysis
Online access: Order full text
Description: Emotion is one of the main psychological factors that affect human behavior. Neural network models trained on electroencephalography (EEG)-based frequency features have been widely used to recognize human emotions accurately. However, exploiting EEG-based spatial information with the popular 2-D kernels of convolutional neural networks (CNNs) has rarely been explored in the extant literature. This article addresses these challenges by proposing an EEG-based spatial-frequency framework for recognizing human emotion, resulting in fewer human-interference parameters and better generalization performance. Specifically, we propose a two-stream hierarchical network framework that learns features from two networks: one trained in the frequency domain and the other in the spatial domain. Our approach is extensively validated on the SEED, SEED-V, and DREAMER datasets. The proposed method achieves an accuracy of 94.84% on the SEED dataset and 68.61% on the SEED-V dataset with EEG data only. On the DREAMER dataset, the average accuracies are 93.01%, 92.04%, and 91.74% in the valence, arousal, and dominance dimensions, respectively. The experiments directly support our motivation: utilizing the two-stream domain features significantly improves final recognition performance. The experimental results show that the proposed framework improves over state-of-the-art methods on these three datasets of varied scale. Furthermore, they also indicate the potential of the proposed framework, in conjunction with current ImageNet-pretrained models, for improving performance on 1-D psychological signals.
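The abstract describes a two-stream design: one network learns from frequency-domain EEG features while the other applies 2-D convolutions to spatially arranged electrode features, and the two embeddings are fused for classification. The sketch below shows one plausible way to wire up such a two-stream model in PyTorch; the layer sizes, the 9x9 electrode grid, the concatenation fusion, and all names (FrequencyStream, SpatialStream, TwoStreamEEGNet) are illustrative assumptions, not the authors' published implementation.

```python
# Hypothetical sketch of a two-stream spatial/frequency EEG emotion classifier.
# Layer sizes, the 9x9 electrode grid, and the fusion head are assumptions for
# illustration; they are not the configuration reported in the paper.
import torch
import torch.nn as nn

class FrequencyStream(nn.Module):
    """Stream over per-channel frequency-band features (e.g., one value per band)."""
    def __init__(self, n_channels=62, n_bands=5, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),                         # (B, C, bands) -> (B, C*bands)
            nn.Linear(n_channels * n_bands, hidden),
            nn.ReLU(),
            nn.Linear(hidden, hidden),
            nn.ReLU(),
        )

    def forward(self, x):
        return self.net(x)

class SpatialStream(nn.Module):
    """2-D CNN over band features mapped onto an electrode-position grid."""
    def __init__(self, n_bands=5, hidden=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(n_bands, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),              # (B, 64, 1, 1)
            nn.Flatten(),                         # (B, 64)
            nn.Linear(64, hidden),
            nn.ReLU(),
        )

    def forward(self, x):
        return self.conv(x)

class TwoStreamEEGNet(nn.Module):
    """Concatenates both streams' embeddings before the classification head."""
    def __init__(self, n_classes=3, hidden=128):
        super().__init__()
        self.freq = FrequencyStream(hidden=hidden)
        self.spat = SpatialStream(hidden=hidden)
        self.head = nn.Linear(2 * hidden, n_classes)

    def forward(self, x_freq, x_spatial):
        z = torch.cat([self.freq(x_freq), self.spat(x_spatial)], dim=1)
        return self.head(z)

# Example shapes: 62 channels x 5 bands for the frequency stream, and a
# 9x9 electrode grid with 5 band planes for the spatial stream.
model = TwoStreamEEGNet()
x_freq = torch.randn(8, 62, 5)
x_spatial = torch.randn(8, 5, 9, 9)
logits = model(x_freq, x_spatial)  # (8, 3): e.g., SEED's three emotion classes
```

Fusing after separate per-domain encoders lets each stream specialize before combination, which is consistent with the paper's claim that using both domain features improves recognition over a single domain.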
DOI: 10.1109/TCDS.2024.3391131
ISSN: 2379-8920
EISSN: 2379-8939
Source: IEEE Electronic Library (IEL)
URL: https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-03T11%3A12%3A23IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-crossref_RIE&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Minimizing%20EEG%20Human%20Interference:%20A%20Study%20of%20an%20Adaptive%20EEG%20Spatial%20Feature%20Extraction%20With%20Deep%20Convolutional%20Neural%20Networks&rft.jtitle=IEEE%20transactions%20on%20cognitive%20and%20developmental%20systems&rft.au=Deng,%20Haojin&rft.date=2024-12-01&rft.volume=16&rft.issue=6&rft.spage=1915&rft.epage=1928&rft.pages=1915-1928&rft.issn=2379-8920&rft.eissn=2379-8939&rft.coden=ITCDA4&rft_id=info:doi/10.1109/TCDS.2024.3391131&rft_dat=%3Ccrossref_RIE%3E10_1109_TCDS_2024_3391131%3C/crossref_RIE%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rft_ieee_id=10505033&rfr_iscdi=true