Emotion detection from EEG recordings based on supervised and unsupervised dimension reduction
Published in: | Concurrency and computation 2018-12, Vol.30 (23), p.n/a |
Main authors: | Liu, Jingxin; Meng, Hongying; Li, Maozhen; Zhang, Fan; Qin, Rui; Nandi, Asoke K. |
Format: | Article |
Language: | English |
Subjects: | affective computing; Brain; EEG; Electroencephalography; emotion detection; feature dimension reduction; Feature extraction; feature selection; Principal components analysis; Reduction; Redundancy |
Online access: | Full text |
container_end_page | n/a |
container_issue | 23 |
container_start_page | |
container_title | Concurrency and computation |
container_volume | 30 |
creator | Liu, Jingxin; Meng, Hongying; Li, Maozhen; Zhang, Fan; Qin, Rui; Nandi, Asoke K. |
description | Summary: In recent years, researchers have been trying to detect human emotions from recorded brain signals such as electroencephalogram (EEG) signals. However, due to the high levels of noise in EEG recordings, a single feature alone cannot achieve good performance; a combination of distinct features is the key to automatic emotion detection. In this paper, we present a hybrid feature dimension reduction scheme using a total of 14 different features extracted from EEG recordings. The scheme combines these distinct features in the feature space using both supervised and unsupervised feature selection processes. Maximum Relevance Minimum Redundancy (mRMR) is applied to re-order the combined features by maximum relevance to the labels and minimum redundancy among features. The re-ordered features are further reduced with principal component analysis (PCA) to extract the principal components. Experimental results show that the proposed scheme outperforms state-of-the-art methods using the same settings on the publicly available DEAP data set. |
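For readers who want to experiment with the pipeline the abstract describes, the sketch below gives one plausible reading of it in Python: a combined feature matrix is re-ordered with a simple mRMR-style criterion (mutual-information relevance minus correlation redundancy) and then projected with PCA. This is a minimal illustration built on scikit-learn and synthetic data, not the authors' implementation; the `mrmr_order` helper, the feature counts, and the binary labels are all assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.feature_selection import mutual_info_classif


def mrmr_order(X, y, n_select):
    """Greedily rank feature columns by relevance to y minus redundancy
    with the features already picked (a simple mRMR-style criterion)."""
    relevance = mutual_info_classif(X, y, random_state=0)
    selected, remaining = [], list(range(X.shape[1]))
    while remaining and len(selected) < n_select:
        scores = []
        for j in remaining:
            if selected:
                # Redundancy: mean absolute correlation with already selected columns
                redundancy = np.mean(
                    [abs(np.corrcoef(X[:, j], X[:, s])[0, 1]) for s in selected]
                )
            else:
                redundancy = 0.0
            scores.append(relevance[j] - redundancy)
        best = remaining[int(np.argmax(scores))]
        selected.append(best)
        remaining.remove(best)
    return selected


# Toy data standing in for the combined EEG feature matrix (trials x features);
# the shapes, label coding, and n_select / n_components values are illustrative only.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 140))      # e.g. 14 feature types over many channels
y = rng.integers(0, 2, size=200)         # binary high/low valence (or arousal) labels

order = mrmr_order(X, y, n_select=40)                     # supervised re-ordering (mRMR)
X_red = PCA(n_components=10).fit_transform(X[:, order])   # unsupervised reduction (PCA)
print(X_red.shape)                                        # (200, 10)
```

In a real experiment the columns of `X` would be the 14 EEG feature types mentioned in the abstract, computed per channel, and the reduced representation would then feed a classifier evaluated on the DEAP labels.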
doi_str_mv | 10.1002/cpe.4446 |
format | Article |
eissn | 1532-0634 |
publisher | Hoboken: Wiley Subscription Services, Inc |
publication date | 2018-12-10 |
rights | 2018 The Authors. Published by John Wiley & Sons, Ltd. |
orcid | https://orcid.org/0000-0002-8836-1382 |
fulltext | fulltext |
identifier | ISSN: 1532-0626 |
ispartof | Concurrency and computation, 2018-12, Vol.30 (23), p.n/a |
issn | 1532-0626; 1532-0634 |
language | eng |
recordid | cdi_proquest_journals_2132156655 |
source | Access via Wiley Online Library |
subjects | affective computing; Brain; EEG; Electroencephalography; emotion detection; feature dimension reduction; Feature extraction; feature selection; Principal components analysis; Reduction; Redundancy |
title | Emotion detection from EEG recordings based on supervised and unsupervised dimension reduction |