Hand Gesture Recognition Across Various Limb Positions Using a Multimodal Sensing System Based on Self-Adaptive Data-Fusion and Convolutional Neural Networks (CNNs)
This study explores the challenge of hand gesture recognition across various limb positions using a new co-located multimodal armband system incorporating surface electromyography (sEMG) and pressure-based force myography (pFMG) sensors. Conventional machine learning (ML) algorithms and convolutional neural network (CNN) models were evaluated for accurately recognizing hand gestures. A comprehensive investigation was conducted, encompassing feature-level and decision-level CNN models, alongside advanced fusion techniques to enhance recognition performance. This research consistently demonstrates the superiority of CNN models, revealing their potential for extracting intricate patterns from raw multimodal sensor data. The study showed significant accuracy improvements over single-modality approaches, emphasizing the synergistic effects of multimodal sensing. Notably, the CNN models achieved 88.34% accuracy for self-adaptive decision-level fusion and 87.79% accuracy for feature-level fusion, outperforming linear discriminant analysis (LDA) with 83.33% accuracy when considering all nine gestures. Furthermore, the study explores the relationship between the number of hand gestures and recognition accuracy, revealing consistently high accuracy levels ranging from 88% to 100% for two to nine gestures and a remarkable 98% accuracy for the commonly used five gestures. This research underscores the adaptability of CNNs in effectively capturing the complex complementarity between multimodal data and varying limb positions, advancing the field of gesture recognition and emphasizing the potential of high-level data-fusion deep learning (DL) techniques in wearable sensing systems. This study provides valuable contributions to how multimodal sensor/data fusion, coupled with advanced ML methods, enhances hand gesture recognition accuracy, ultimately paving the way for more effective and adaptable wearable technology applications.
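The abstract contrasts two CNN fusion strategies: feature-level fusion, where learned sEMG and pFMG representations are combined inside a single network, and self-adaptive decision-level fusion, where each modality classifies independently and the predictions are merged afterwards. The article itself publishes no code, so the PyTorch sketch below is only an illustration of the two patterns: the layer sizes, channel counts (8 sEMG, 4 pFMG), window length, and the per-sample confidence-weighting rule are all assumptions, not the authors' architecture.

```python
# Illustrative sketch only: all dimensions and the weighting rule are assumed.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ModalityBranch(nn.Module):
    """1-D CNN feature extractor for one sensor modality (sEMG or pFMG)."""
    def __init__(self, in_channels: int, feat_dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(32, feat_dim, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # collapse the time axis
            nn.Flatten(),             # -> (batch, feat_dim)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

class FeatureLevelFusion(nn.Module):
    """Concatenate per-modality feature vectors, then classify jointly."""
    def __init__(self, emg_ch: int, fmg_ch: int, n_gestures: int = 9):
        super().__init__()
        self.emg = ModalityBranch(emg_ch)
        self.fmg = ModalityBranch(fmg_ch)
        self.head = nn.Linear(2 * 64, n_gestures)

    def forward(self, emg: torch.Tensor, fmg: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([self.emg(emg), self.fmg(fmg)], dim=1)
        return self.head(fused)

class DecisionLevelFusion(nn.Module):
    """Each modality predicts on its own; predictions are merged afterwards.

    The "self-adaptive" weighting here is one plausible reading: each
    branch's vote is scaled by its own softmax confidence per sample.
    """
    def __init__(self, emg_ch: int, fmg_ch: int, n_gestures: int = 9):
        super().__init__()
        self.emg_clf = nn.Sequential(ModalityBranch(emg_ch), nn.Linear(64, n_gestures))
        self.fmg_clf = nn.Sequential(ModalityBranch(fmg_ch), nn.Linear(64, n_gestures))

    def forward(self, emg: torch.Tensor, fmg: torch.Tensor) -> torch.Tensor:
        p_emg = F.softmax(self.emg_clf(emg), dim=1)
        p_fmg = F.softmax(self.fmg_clf(fmg), dim=1)
        # Per-sample confidence = max class probability of each branch.
        w = torch.stack([p_emg.max(dim=1).values, p_fmg.max(dim=1).values], dim=1)
        w = w / w.sum(dim=1, keepdim=True)  # normalize the two weights
        return w[:, :1] * p_emg + w[:, 1:] * p_fmg  # fused class probabilities

# Assumed shapes: 8 sEMG and 4 pFMG channels over a 200-sample window.
emg = torch.randn(16, 8, 200)
fmg = torch.randn(16, 4, 200)
print(FeatureLevelFusion(8, 4)(emg, fmg).shape)   # torch.Size([16, 9])
print(DecisionLevelFusion(8, 4)(emg, fmg).shape)  # torch.Size([16, 9])
```

The design difference matters for the limb-position problem: a decision-level scheme keeps the branches independent, so a self-adaptive weight can shift trust toward whichever modality is more reliable for a given arm posture, whereas feature-level fusion commits to a single joint representation.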
Saved in:
Published in: | IEEE Sensors Journal 2024-06, Vol. 24 (11), p. 18633-18645 |
---|---|
Main authors: | Zhang, Shen; Zhou, Hao; Tchantchane, Rayane; Alici, Gursel |
Format: | Article |
Language: | English |
Subjects: | |
Online access: | Order full text |
container_end_page | 18645 |
---|---|
container_issue | 11 |
container_start_page | 18633 |
container_title | IEEE sensors journal |
container_volume | 24 |
creator | Zhang, Shen; Zhou, Hao; Tchantchane, Rayane; Alici, Gursel |
description | This study explores the challenge of hand gesture recognition across various limb positions using a new co-located multimodal armband system incorporating surface electromyography (sEMG) and pressure-based force myography (pFMG) sensors. Conventional machine learning (ML) algorithms and convolutional neural network (CNN) models were evaluated for accurately recognizing hand gestures. A comprehensive investigation was conducted, encompassing feature-level and decision-level CNN models, alongside advanced fusion techniques to enhance recognition performance. This research consistently demonstrates the superiority of CNN models, revealing their potential for extracting intricate patterns from raw multimodal sensor data. The study showed significant accuracy improvements over single-modality approaches, emphasizing the synergistic effects of multimodal sensing. Notably, the CNN models achieved 88.34% accuracy for self-adaptive decision-level fusion and 87.79% accuracy for feature-level fusion, outperforming linear discriminant analysis (LDA) with 83.33% accuracy when considering all nine gestures. Furthermore, the study explores the relationship between the number of hand gestures and recognition accuracy, revealing consistently high accuracy levels ranging from 88% to 100% for two to nine gestures and a remarkable 98% accuracy for the commonly used five gestures. This research underscores the adaptability of CNNs in effectively capturing the complex complementarity between multimodal data and varying limb positions, advancing the field of gesture recognition and emphasizing the potential of high-level data-fusion deep learning (DL) techniques in wearable sensing systems. This study provides valuable contributions to how multimodal sensor/data fusion, coupled with advanced ML methods, enhances hand gesture recognition accuracy, ultimately paving the way for more effective and adaptable wearable technology applications. |
doi_str_mv | 10.1109/JSEN.2024.3389963 |
format | Article |
fulltext | fulltext_linktorsrc |
identifier | ISSN: 1530-437X |
ispartof | IEEE sensors journal, 2024-06, Vol.24 (11), p.18633-18645 |
issn | 1530-437X; 1558-1748 |
language | eng |
recordid | cdi_ieee_primary_10506463 |
source | IEEE Electronic Library (IEL) |
subjects | Accuracy; Algorithms; Artificial neural networks; Classification algorithms; Convolutional neural networks; Data fusion; Data integration; Deep learning; deep learning (DL); Discriminant analysis; Electromyography; Feature extraction; Gesture recognition; hand gesture recognition; human–machine interface (HMI); limb position effect; Machine learning; multimodal sensing; Multisensor fusion; Neural networks; sensor fusion; Sensors; Synergistic effect; Training; Wearable technology |
title | Hand Gesture Recognition Across Various Limb Positions Using a Multimodal Sensing System Based on Self-Adaptive Data-Fusion and Convolutional Neural Networks (CNNs) |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-22T19%3A32%3A49IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_RIE&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Hand%20Gesture%20Recognition%20Across%20Various%20Limb%20Positions%20Using%20a%20Multimodal%20Sensing%20System%20Based%20on%20Self-Adaptive%20Data-Fusion%20and%20Convolutional%20Neural%20Networks%20(CNNs)&rft.jtitle=IEEE%20sensors%20journal&rft.au=Zhang,%20Shen&rft.date=2024-06-01&rft.volume=24&rft.issue=11&rft.spage=18633&rft.epage=18645&rft.pages=18633-18645&rft.issn=1530-437X&rft.eissn=1558-1748&rft.coden=ISJEAZ&rft_id=info:doi/10.1109/JSEN.2024.3389963&rft_dat=%3Cproquest_RIE%3E3064703311%3C/proquest_RIE%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=3064703311&rft_id=info:pmid/&rft_ieee_id=10506463&rfr_iscdi=true |