MLDA: Multi-Loss Domain Adaptor for Cross-Session and Cross-Emotion EEG-Based Individual Identification


Bibliographic Details
Published in: IEEE journal of biomedical and health informatics 2023-12, Vol.27 (12), p.5767-5778
Main authors: Miao, Yifan; Jiang, Wanqing; Su, Nuo; Shan, Jun; Jiang, Tianzi; Zuo, Nianming
Format: Article
Language: English
Subjects:
Online access: Order full text
description Traditional individual identification methods, such as face and fingerprint recognition, carry the risk of personal information leakage. The uniqueness and privacy of electroencephalograms (EEG) and the popularization of EEG acquisition devices have intensified research on EEG-based individual identification in recent years. However, most existing work uses EEG signals from a single session or emotion, ignoring large differences between domains. As EEG signals do not satisfy the traditional deep learning assumption that training and test sets are independently and identically distributed, it is difficult for trained models to maintain good classification performance for new sessions or new emotions. In this article, an individual identification method, called Multi-Loss Domain Adaptor (MLDA), is proposed to deal with the differences between marginal and conditional distributions elicited by different domains. The proposed method consists of four parts: a) Feature extractor, which uses deep neural networks to extract deep features from EEG data; b) Label predictor, which uses full-layer networks to predict subject labels; c) Marginal distribution adaptation, which uses maximum mean discrepancy (MMD) to reduce marginal distribution differences; d) Associative domain adaptation, which adapts to conditional distribution differences. Using the MLDA method, the cross-session and cross-emotion EEG-based individual identification problem is addressed by reducing the influence of time and emotion. Experimental results confirmed that the method outperforms other state-of-the-art approaches.
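The marginal-distribution adaptation step described in the abstract relies on maximum mean discrepancy (MMD). As a minimal sketch of the loss term's idea, not the authors' implementation, the following NumPy code computes the biased empirical MMD² estimate with an RBF kernel; the `gamma` value and the toy session data are assumptions for illustration:

```python
import numpy as np

def rbf_kernel(a, b, gamma=1.0):
    # Pairwise RBF kernel matrix: k(x, y) = exp(-gamma * ||x - y||^2)
    sq_dists = ((a[:, None, :] - b[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * sq_dists)

def mmd2(source, target, gamma=1.0):
    # Biased empirical estimate of squared maximum mean discrepancy:
    # mean k(s, s') + mean k(t, t') - 2 * mean k(s, t)
    k_ss = rbf_kernel(source, source, gamma).mean()
    k_tt = rbf_kernel(target, target, gamma).mean()
    k_st = rbf_kernel(source, target, gamma).mean()
    return k_ss + k_tt - 2.0 * k_st

# Toy stand-ins for deep features from two recording sessions
rng = np.random.default_rng(0)
src = rng.normal(0.0, 1.0, (64, 8))        # source-session features
tgt_same = rng.normal(0.0, 1.0, (64, 8))   # same marginal distribution
tgt_shift = rng.normal(2.0, 1.0, (64, 8))  # shifted marginal distribution
print(round(mmd2(src, tgt_same), 4), round(mmd2(src, tgt_shift), 4))
```

Minimizing this quantity over the extracted source- and target-domain features pulls the two marginal distributions together, which is the role MMD plays in part (c) of the method.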
doi 10.1109/JBHI.2023.3315974
issn 2168-2194
2168-2208
eissn 2168-2208
source IEEE Electronic Library (IEL)
subjects across mental states
across time
Adaptation
Adapters
Adaptor proteins
Algorithms
Artificial neural networks
biometric
Biometric recognition systems
Brain modeling
Deep learning
domain adaptation
EEG
Electroencephalography
Electroencephalography - methods
Emotions
Feature extraction
Fingerprint verification
Humans
Identification methods
Labels
Machine learning
Motion pictures
Neural networks
Neural Networks, Computer
Pattern recognition
Recording
Software
Support vector machines
Task analysis
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2024-12-17T08%3A01%3A52IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_RIE&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=MLDA:%20Multi-Loss%20Domain%20Adaptor%20for%20Cross-Session%20and%20Cross-Emotion%20EEG-Based%20Individual%20Identification&rft.jtitle=IEEE%20journal%20of%20biomedical%20and%20health%20informatics&rft.au=Miao,%20Yifan&rft.date=2023-12-01&rft.volume=27&rft.issue=12&rft.spage=5767&rft.epage=5778&rft.pages=5767-5778&rft.issn=2168-2194&rft.eissn=2168-2208&rft.coden=IJBHA9&rft_id=info:doi/10.1109/JBHI.2023.3315974&rft_dat=%3Cproquest_RIE%3E2865781996%3C/proquest_RIE%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2898890621&rft_id=info:pmid/37713231&rft_ieee_id=10251967&rfr_iscdi=true