Cross-Subject Multimodal Emotion Recognition Based on Hybrid Fusion

Multimodal emotion recognition has gained traction in the affective computing research community as a way to overcome the limitations of processing a single form of data and to increase recognition robustness. In this study, a novel emotion recognition system is introduced which is based on multiple modalities, including facial expressions, galvanic skin response (GSR) and electroencephalogram (EEG). The method follows a hybrid fusion strategy and yields a maximum leave-one-subject-out accuracy of 81.2% and a mean accuracy of 74.2% on our bespoke multimodal emotion dataset (LUMED-2) for three emotion classes: sad, neutral and happy. Similarly, our approach yields a maximum leave-one-subject-out accuracy of 91.5% and a mean accuracy of 53.8% on the Database for Emotion Analysis using Physiological Signals (DEAP) for varying numbers of emotion classes (four on average), including angry, disgusted, afraid, happy, neutral, sad and surprised. The presented model is particularly useful for determining the correct emotional state in the case of natural deceptive facial expressions. In terms of emotion recognition accuracy, this study is superior to, or on par with, the reference subject-independent multimodal emotion recognition studies in the literature.
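The abstract only names the fusion strategy, so purely as an illustration (not the authors' implementation) the minimal Python sketch below shows what the decision-level part of such a fusion could look like for the three modalities. The class labels match the three-class LUMED-2 setting described above, but the modality weights, the example probabilities and the function name fuse_predictions are hypothetical.

    # Illustrative decision-level (late) fusion of three modality classifiers.
    # The weights and example probabilities are hypothetical; the paper itself
    # describes a hybrid fusion strategy rather than this exact rule.
    import numpy as np

    CLASSES = ["sad", "neutral", "happy"]  # three-class setting (LUMED-2)

    def fuse_predictions(face_probs, gsr_probs, eeg_probs, weights=(0.4, 0.3, 0.3)):
        """Weighted average of per-modality class probabilities.

        Each argument is a length-3 array that sums to 1, e.g. the softmax
        output of a modality-specific classifier.
        """
        stacked = np.stack([face_probs, gsr_probs, eeg_probs])  # shape (3, n_classes)
        w = np.asarray(weights)[:, None]                        # shape (3, 1)
        fused = (w * stacked).sum(axis=0) / w.sum()             # shape (n_classes,)
        return CLASSES[int(np.argmax(fused))], fused

    # A deceptive smile: the face model votes "happy", but GSR and EEG disagree.
    face = np.array([0.10, 0.15, 0.75])
    gsr = np.array([0.60, 0.30, 0.10])
    eeg = np.array([0.55, 0.35, 0.10])
    label, fused = fuse_predictions(face, gsr, eeg)
    print(label, fused)  # -> "sad": the physiological evidence outweighs the smile

In a hybrid scheme such as the one the title refers to, a rule like this would typically be combined with feature-level fusion of the physiological signals before classification; the example is only meant to show why a multimodal decision can correct a misleading facial expression.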

Bibliographic details
Published in: IEEE Access, 2020, Vol. 8, pp. 168865-168878
Main authors: Cimtay, Yucel; Ekmekcioglu, Erhan; Caglar-Ozhan, Seyma
Format: Article
Language: English
Subjects: Accuracy; Affective computing; Brain modeling; convolutional neural network; Data models; electroencephalogram; Electroencephalography; Emotion recognition; Emotions; Feature extraction; Galvanic skin response; multimodal data fusion; multimodal emotion recognition; Physiology; Support vector machines
Online access: Full text
DOI: 10.1109/ACCESS.2020.3023871
ISSN: 2169-3536