A force levels and gestures integrated multi-task strategy for neural decoding

This paper discusses the problem of decoding gestures represented by surface electromyography (sEMG) signals in the presence of variable force levels. Multi-task learning (MTL) is proposed to recognize gestures and force levels simultaneously. First, methods of gesture recogniti...

Detailed description

Saved in:
Bibliographic details
Published in: Complex & Intelligent Systems 2020-10, Vol.6 (3), p.469-478
Main authors: Hua, Shaoyang, Wang, Congqing, Xie, Zuoshu, Wu, Xuewei
Format: Article
Language: English
Subjects:
Online access: Full text
container_end_page 478
container_issue 3
container_start_page 469
container_title Complex & Intelligent Systems
container_volume 6
creator Hua, Shaoyang
Wang, Congqing
Xie, Zuoshu
Wu, Xuewei
description This paper discusses the problem of decoding gestures represented by surface electromyography (sEMG) signals in the presence of variable force levels. Multi-task learning (MTL) is proposed to recognize gestures and force levels simultaneously. First, methods of gesture recognition with different force levels are investigated. Then, an MTL framework is presented to improve gesture recognition performance and to provide information about force levels. Finally, to address the problem of relying on the greedy principle in MTL, a modified pseudo-task augmentation (PTA) trajectory is introduced. Experiments conducted on two representative datasets demonstrate that, compared with other methods, frequency-domain information with a convolutional neural network (CNN) is more suitable for gesture recognition under variable force levels. In addition, the feasibility of extracting features that are closely related to both gestures and force levels is verified via MTL. By influencing the learning dynamics, the proposed PTA method can improve the results of all tasks and makes the approach applicable to cases where the main tasks and auxiliary tasks are clearly defined.
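
The description above outlines a multi-task setup in which a CNN over frequency-domain sEMG features is trained to predict gestures and force levels jointly. The Python/PyTorch sketch below is only a minimal illustration of that shared-encoder idea, not the authors' implementation: the channel count, layer sizes, and class counts are assumptions, and the modified pseudo-task augmentation trajectory (which attaches extra decoders per task to shape learning dynamics) is not reproduced here.

# Minimal sketch of the multi-task idea in the abstract (illustrative only):
# a shared CNN encoder over frequency-domain sEMG features feeds two heads,
# one for gestures and one for force levels. All shapes, layer sizes, and
# class counts are assumptions, not values from the paper.
import torch
import torch.nn as nn

class MultiTaskEMGNet(nn.Module):
    def __init__(self, in_channels=8, n_gestures=6, n_force_levels=3):
        super().__init__()
        # Shared trunk: 1-D convolutions over the frequency axis of each channel.
        self.encoder = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
            nn.Flatten(),
        )
        # Task-specific heads share the encoder's representation.
        self.gesture_head = nn.Linear(64, n_gestures)
        self.force_head = nn.Linear(64, n_force_levels)

    def forward(self, x):
        z = self.encoder(x)  # shared features used by both tasks
        return self.gesture_head(z), self.force_head(z)

if __name__ == "__main__":
    # Dummy batch: 16 windows, 8 sEMG channels, 128 frequency bins each.
    x = torch.randn(16, 8, 128)
    gesture_labels = torch.randint(0, 6, (16,))
    force_labels = torch.randint(0, 3, (16,))

    model = MultiTaskEMGNet()
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

    optimizer.zero_grad()
    g_logits, f_logits = model(x)
    # Joint objective: both tasks influence the shared encoder's learning dynamics.
    loss = criterion(g_logits, gesture_labels) + criterion(f_logits, force_labels)
    loss.backward()
    optimizer.step()

A shared encoder with one head per task and a summed loss is the standard baseline on which PTA-style methods build by adding multiple pseudo-decoders per task.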
doi_str_mv 10.1007/s40747-020-00140-9
format Article
fulltext fulltext
identifier ISSN: 2199-4536
ispartof Complex & Intelligent Systems, 2020-10, Vol.6 (3), p.469-478
issn 2199-4536
2198-6053
language eng
recordid cdi_proquest_journals_2440211550
source DOAJ Directory of Open Access Journals; Elektronische Zeitschriftenbibliothek - Frei zugängliche E-Journals; SpringerLink Journals - AutoHoldings; Springer Nature OA Free Journals
subjects Artificial neural networks
Comparative analysis
Complexity
Computational Intelligence
Data Structures and Information Theory
Electromyography
Engineering
Feature extraction
Gesture recognition
Learning
Military readiness
Military strategy
Original Article
title A force levels and gestures integrated multi-task strategy for neural decoding