Smartphone-based gait recognition using convolutional neural networks and dual-tree complex wavelet transform
Gait recognition is an efficient way of identifying people from their walking behavior using inertial sensors integrated into smartphones. These inertial sensors, such as accelerometers and gyroscopes, readily collect the gait data used by existing deep learning-based gait recognition methods. Although these methods, in particular hybrid deep neural networks, provide good gait feature representations, both their recognition accuracy and their computational cost need to be improved. In this paper, a person identification framework operating on smartphone-acquired inertial gait signals is proposed to overcome these limitations. It combines a convolutional neural network (CNN) with the dual-tree complex wavelet transform (DTCWT) and is named CNN–DTCWT. In the proposed framework, a global average pooling layer and a DTCWT layer are integrated into the CNN to provide a robust and highly accurate inertial gait feature representation. Experimental results demonstrate the superiority of the proposed structure over state-of-the-art models: tested on three data sets, it achieves higher recognition performance than state-of-the-art CNN-based models, LSTM-based models, and hybrid networks, with average recognition accuracy improvements of 1.7–14.95%.
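The abstract describes a CNN in which a global average pooling layer and a DTCWT layer provide the inertial gait feature representation. The sketch below is a minimal, hypothetical illustration of such a pipeline in PyTorch, not the authors' implementation: the model name, channel counts, kernel sizes, window length, and number of subjects are assumed values, and the DTCWT layer is only marked by a placeholder comment because its exact form and position are not given in this record.

```python
# Hypothetical sketch of a 1-D CNN over smartphone accelerometer/gyroscope
# windows with a global average pooling (GAP) head, loosely following the
# abstract's description. Not the paper's CNN-DTCWT model.
import torch
import torch.nn as nn


class GaitCNN(nn.Module):
    def __init__(self, in_channels: int = 6, num_subjects: int = 20):
        super().__init__()
        # in_channels = 6 assumes a 3-axis accelerometer plus a 3-axis gyroscope.
        self.features = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=7, padding=3),
            nn.BatchNorm1d(32),
            nn.ReLU(),
            nn.MaxPool1d(2),
            # The paper's DTCWT-based layer would sit somewhere in this stack;
            # it is omitted here because its implementation is not specified.
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.BatchNorm1d(64),
            nn.ReLU(),
        )
        self.gap = nn.AdaptiveAvgPool1d(1)      # global average pooling
        self.classifier = nn.Linear(64, num_subjects)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, samples), i.e. a fixed-length inertial gait window
        h = self.features(x)
        h = self.gap(h).squeeze(-1)             # (batch, 64) feature vector
        return self.classifier(h)               # one score per enrolled subject


if __name__ == "__main__":
    model = GaitCNN()
    window = torch.randn(8, 6, 128)             # 8 windows of 128 samples (assumed length)
    print(model(window).shape)                  # torch.Size([8, 20])
```

Replacing a flatten-plus-dense head with global average pooling keeps the classifier small regardless of window length, which is consistent with the abstract's aim of reducing computational cost.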
Published in: | Multimedia systems, 2022-12, Vol. 28 (6), p. 2307-2317 |
---|---|
Main authors: | Sezavar, Ahmadreza; Atta, Randa; Ghanbari, Mohammad |
Format: | Article |
Language: | English |
Subjects: | Accelerometers; Accuracy; Artificial neural networks; Computer Communication Networks; Computer Graphics; Computer Science; Cryptology; Data Storage Representation; Feature recognition; Gait recognition; Inertial sensing devices; Machine learning; Multimedia Information Systems; Neural networks; Operating Systems; Regular Article; Representations; Smartphones; Wavelet transforms |
DOI: | 10.1007/s00530-022-00954-2 |
ISSN: | 0942-4962 (print); 1432-1882 (electronic) |
Publisher: | Springer Berlin Heidelberg |
Online access: | Full text |