EEGMatch: Learning with Incomplete Labels for Semi-Supervised EEG-based Cross-Subject Emotion Recognition

Electroencephalography (EEG) is an objective tool for emotion recognition and shows promising performance. However, label scarcity is a major challenge in this field, which limits the wide application of EEG-based emotion recognition. In this paper, we propose a novel semi-supervised learning framework (EEGMatch) to leverage both labeled and unlabeled EEG data. First, an EEG-Mixup based data augmentation method is developed to generate more valid samples for model learning. Second, a semi-supervised two-step pairwise learning method is proposed to bridge prototype-wise and instance-wise pairwise learning, where the prototype-wise pairwise learning measures the global relationship between EEG data and the prototypical representation of each emotion class, and the instance-wise pairwise learning captures the local intrinsic relationship among EEG data. Third, a semi-supervised multi-domain adaptation method is introduced to align the data representation among multiple domains (labeled source domain, unlabeled source domain, and target domain), alleviating the distribution mismatch. Extensive experiments are conducted on two benchmark databases (SEED and SEED-IV) under a cross-subject leave-one-subject-out cross-validation protocol. The results show that the proposed EEGMatch outperforms state-of-the-art methods under different incomplete-label conditions (with a 6.89% improvement on SEED and a 1.44% improvement on SEED-IV), demonstrating its effectiveness in dealing with the label scarcity problem in EEG-based emotion recognition. The source code is available at https://github.com/KAZABANA/EEGMatch.
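The first component named in the abstract is an EEG-Mixup based data augmentation. As a rough illustration of the mixup idea it builds on, here is a minimal PyTorch sketch; the function name eeg_mixup, the feature shape (310 differential-entropy features is one common SEED configuration), and the within-batch random pairing are assumptions of this sketch rather than details taken from the paper, whose actual implementation is in the linked repository.

```python
# Minimal mixup-style augmentation sketch for EEG feature vectors.
# Shapes and names are illustrative assumptions, not the paper's code.
import torch

def eeg_mixup(x, y, alpha=0.5):
    """Convex-combine random pairs of EEG samples and their labels."""
    lam = torch.distributions.Beta(alpha, alpha).sample()  # mixing ratio in (0, 1)
    perm = torch.randperm(x.size(0))                       # random within-batch pairing
    x_mix = lam * x + (1 - lam) * x[perm]                  # mixed features
    y_mix = lam * y + (1 - lam) * y[perm]                  # mixed (soft) labels
    return x_mix, y_mix

# Toy usage: 32 samples, 310 features (62 channels x 5 bands), 3 SEED emotion classes.
x = torch.randn(32, 310)
y = torch.eye(3)[torch.randint(0, 3, (32,))]
x_aug, y_aug = eeg_mixup(x, y)
```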

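The second component bridges prototype-wise pairwise learning (each sample against a per-class prototypical representation, the global relation) and instance-wise pairwise learning (sample against sample, the local relation). The sketch below uses generic metric-learning stand-ins, namely mean-embedding prototypes, cosine similarity, and a binary cross-entropy pair loss; these choices are assumptions, and the paper's exact two-step losses may differ.

```python
# Hedged sketch of the two pairwise relations the abstract describes.
import torch
import torch.nn.functional as F

def prototype_similarities(emb, labels, num_classes):
    """Global relation: cosine similarity of each embedding to class-mean prototypes."""
    protos = torch.stack([emb[labels == c].mean(dim=0) for c in range(num_classes)])
    return F.cosine_similarity(emb.unsqueeze(1), protos.unsqueeze(0), dim=-1)  # (N, C)

def instance_pairwise_loss(emb, labels):
    """Local relation: pull same-class pairs together, push different-class pairs apart."""
    sim = F.cosine_similarity(emb.unsqueeze(1), emb.unsqueeze(0), dim=-1)  # (N, N)
    same = (labels.unsqueeze(0) == labels.unsqueeze(1)).float()            # pair targets
    prob = ((sim + 1) / 2).clamp(1e-6, 1 - 1e-6)                           # cosine -> [0, 1]
    return F.binary_cross_entropy(prob, same)

emb = F.normalize(torch.randn(16, 64), dim=-1)       # toy embeddings
labels = torch.arange(16) % 3                        # all 3 classes present
print(prototype_similarities(emb, labels, 3).shape)  # torch.Size([16, 3])
print(instance_pairwise_loss(emb, labels))
```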

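The third component aligns representations across three domains: the labeled source, the unlabeled source, and the target. The abstract does not state the alignment criterion, so the sketch below uses a plain RBF-kernel maximum mean discrepancy (MMD) penalty, summed over all domain pairs, purely as a hypothetical stand-in for the adaptation term.

```python
# Hedged sketch: pairwise MMD alignment across three embedding domains.
import torch

def rbf_mmd(a, b, sigma=1.0):
    """Biased MMD estimate between two embedding batches with an RBF kernel."""
    def k(x, y):
        return torch.exp(-torch.cdist(x, y).pow(2) / (2 * sigma ** 2))
    return k(a, a).mean() + k(b, b).mean() - 2 * k(a, b).mean()

# Toy embeddings for the three domains the abstract names.
src_labeled, src_unlabeled, target = (torch.randn(32, 64) for _ in range(3))
align = (rbf_mmd(src_labeled, src_unlabeled)
         + rbf_mmd(src_labeled, target)
         + rbf_mmd(src_unlabeled, target))  # added to the training loss as a penalty
```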
Bibliographic Details
Published in: arXiv.org, 2024-08
Main authors: Zhou, Rushuang; Ye, Weishan; Zhang, Zhiguo; Luo, Yanyang; Zhang, Li; Li, Linling; Huang, Gan; Dong, Yining; Zhang, Yuan-Ting; Liang, Zhen
Format: Article
Language: English
EISSN: 2331-8422
Subjects: Data augmentation; Domains; Electroencephalography; Emotion recognition; Emotions; Machine learning; Prototypes; Representations; Semi-supervised learning; Source code
Online access: Full text