Enhanced Spatial-Temporal Salience for Cross-View Gait Recognition
Gait recognition can be used for person identification and re-identification, either on its own or in conjunction with other biometrics. Gait has both spatial and temporal attributes, and it has been observed that decoupling spatial features from temporal features can better exploit gait features at the fine-grained level...
Saved in:
Published in: | IEEE transactions on circuits and systems for video technology 2022-10, Vol.32 (10), p.6967-6980 |
---|---|
Main authors: | Huang, Tianhuan; Ben, Xianye; Gong, Chen; Zhang, Baochang; Yan, Rui; Wu, Qiang |
Format: | Article |
Language: | eng |
Subjects: | Biometrics; Convolution; Correlation; cross view; Data mining; Decoupling; Feature extraction; Gait recognition; Information retrieval; multi-scale salient feature extraction; Signal processing; Solid modeling; spatial-temporal enhance; Three-dimensional displays; Video signals |
Online access: | Order full text |
container_end_page | 6980 |
---|---|
container_issue | 10 |
container_start_page | 6967 |
container_title | IEEE transactions on circuits and systems for video technology |
container_volume | 32 |
creator | Huang, Tianhuan Ben, Xianye Gong, Chen Zhang, Baochang Yan, Rui Wu, Qiang |
description | Gait recognition can be used for person identification and re-identification, either on its own or in conjunction with other biometrics. Gait has both spatial and temporal attributes, and it has been observed that decoupling spatial features from temporal features can better exploit gait features at the fine-grained level. However, the spatial-temporal correlations of gait video signals are lost in the decoupling process. Direct 3D convolution approaches retain such correlations, but they also introduce unnecessary interference. Instead of a common 3D convolution solution, this paper proposes integrating the decoupling process into a 3D convolution framework for cross-view gait recognition. In particular, a novel block consisting of a Parallel-insight Convolution layer integrated with a Spatial-Temporal Dual-Attention (STDA) unit is proposed as the basic block for global spatial-temporal information extraction. Under the guidance of the STDA unit, this block can integrate the spatial-temporal information extracted by two decoupled models while retaining the spatial-temporal correlations. In addition, a Multi-Scale Salient Feature Extractor is proposed to further exploit fine-grained features through context-aware extension of part-based features and adaptive aggregation of spatial features. Extensive experiments on three popular gait datasets, namely CASIA-B, OULP and OUMVLP, demonstrate that the proposed method outperforms state-of-the-art methods. |
doi_str_mv | 10.1109/TCSVT.2022.3175959 |
format | Article |
fulltext | fulltext_linktorsrc |
identifier | ISSN: 1051-8215 |
ispartof | IEEE transactions on circuits and systems for video technology, 2022-10, Vol.32 (10), p.6967-6980 |
issn | 1051-8215 1558-2205 |
language | eng |
recordid | cdi_proquest_journals_2721428017 |
source | IEEE Electronic Library (IEL) |
subjects | Biometrics; Convolution; Correlation; cross view; Data mining; Decoupling; Feature extraction; Gait recognition; Information retrieval; multi-scale salient feature extraction; Signal processing; Solid modeling; spatial-temporal enhance; Three-dimensional displays; Video signals |
title | Enhanced Spatial-Temporal Salience for Cross-View Gait Recognition |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-04T11%3A16%3A37IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_RIE&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Enhanced%20Spatial-Temporal%20Salience%20for%20Cross-View%20Gait%20Recognition&rft.jtitle=IEEE%20transactions%20on%20circuits%20and%20systems%20for%20video%20technology&rft.au=Huang,%20Tianhuan&rft.date=2022-10-01&rft.volume=32&rft.issue=10&rft.spage=6967&rft.epage=6980&rft.pages=6967-6980&rft.issn=1051-8215&rft.eissn=1558-2205&rft.coden=ITCTEM&rft_id=info:doi/10.1109/TCSVT.2022.3175959&rft_dat=%3Cproquest_RIE%3E2721428017%3C/proquest_RIE%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2721428017&rft_id=info:pmid/&rft_ieee_id=9777726&rfr_iscdi=true |
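The abstract describes a Spatial-Temporal Dual-Attention (STDA) unit that re-weights features so that spatial-temporal correlations are retained alongside the decoupled spatial and temporal branches. The paper's actual STDA design is not reproduced in this record; the following is only a minimal, hypothetical NumPy sketch of the general idea of dual (spatial plus temporal) attention gating on a `(C, T, H, W)` gait feature map. The function name `dual_attention` and the mean-then-sigmoid gating scheme are illustrative assumptions, not the authors' method.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def dual_attention(x):
    """Hypothetical spatial-temporal dual-attention gate (illustrative only).

    x: feature map of shape (C, T, H, W) — channels, frames, height, width.
    A spatial gate (shared across channels and time) and a temporal gate
    (shared across channels and space) are applied multiplicatively, so
    salient locations and salient frames are both emphasized.
    """
    # Spatial salience: average over channels and time -> (H, W)
    spatial = sigmoid(x.mean(axis=(0, 1)))
    # Temporal salience: average over channels and space -> (T,)
    temporal = sigmoid(x.mean(axis=(0, 2, 3)))
    # Broadcast both gates back onto (C, T, H, W)
    return x * spatial[None, None, :, :] * temporal[None, :, None, None]

# Toy input: 8 channels, 4 frames, 16x16 spatial resolution
feats = np.random.default_rng(0).normal(size=(8, 4, 16, 16))
out = dual_attention(feats)
print(out.shape)  # (8, 4, 16, 16)
```

Because both gates are sigmoid outputs in (0, 1), the gated features never exceed the input in magnitude; attention here only attenuates less salient positions and frames rather than amplifying anything.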