View-independent gait events detection using CNN-transformer hybrid network

Accurate gait detection is crucial for utilizing the ample health information embedded in gait. Vision-based approaches to gait detection have emerged as an alternative to the exacting sensor-based approaches, but their application has been limited by complicated feature engineering processes and heavy reliance on lateral views. This study therefore aimed to find a simple vision-based approach that is both view-independent and accurate. A total of 22 participants performed six different actions representing standard and peculiar gaits, and the videos of these actions were used as input to the deep learning networks. Four networks, including a 2D convolutional neural network and an attention-based deep learning network, were trained on standard gaits, and their detection performance on both standard and peculiar gaits was assessed using measures including F1-scores. While all networks achieved remarkable detection performance, the CNN-Transformer network performed best on both standard and peculiar gaits, with little deviation across action speeds or view angles. The study is expected to contribute to the wider application of vision-based approaches in gait detection and gait-based health monitoring, both at home and in clinical settings.
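The abstract states that detection performance was assessed using measures including F1-scores. As a minimal sketch (not the authors' implementation; the frame labels and function below are illustrative assumptions), a frame-level F1-score for binary gait-event labels can be computed as follows:

```python
def f1_score(y_true, y_pred, positive=1):
    """Frame-level F1 for binary gait-event labels (1 = event frame)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)  # fraction of predicted event frames that are correct
    recall = tp / (tp + fn)     # fraction of true event frames that were found
    return 2 * precision * recall / (precision + recall)

# Toy example: ground-truth vs. predicted heel-strike frames (hypothetical data)
true_labels = [0, 1, 0, 0, 1, 0, 1, 0]
pred_labels = [0, 1, 0, 1, 1, 0, 0, 0]
print(round(f1_score(true_labels, pred_labels), 3))  # prints 0.667
```

In practice the paper compares this kind of per-event metric across networks, view angles, and walking speeds; this snippet only illustrates the metric itself.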

Bibliographic Details

Published in: Journal of biomedical informatics, 2023-11, Vol. 147, p. 104524, Article 104524
Main authors: Jamsrandorj, Ankhzaya; Jung, Dawoon; Kumar, Konki Sravan; Arshad, Muhammad Zeeshan; Lim, Hwasup; Kim, Jinwook; Mun, Kyung-Ryoul
Format: Article
Language: English
Online access: Full text
Highlights
• A reliable vision-based approach for gait event detection is proposed.
• The proposed method achieved outstanding performance in detecting both standard and peculiar gaits.
• Gait events were accurately detected regardless of camera position or angle and walking speed.
• The proposed approach could offer significant practical benefits and convenience.

DOI: 10.1016/j.jbi.2023.104524
ISSN: 1532-0464
EISSN: 1532-0480
Subjects: Attention-based network; Convolutional neural network; Deep learning; Gait events detection; View-independent method