Shallow Graph Convolutional Network for Skeleton-Based Action Recognition

Graph convolutional networks (GCNs) have brought considerable improvement to the skeleton-based action recognition task. Existing GCN-based methods usually use a fixed spatial graph size across all layers, which severely limits the model's ability to exploit global and semantic discriminative information because of the restricted receptive fields.

Detailed description

Saved in:
Bibliographic details
Published in: Sensors (Basel, Switzerland), 2021-01, Vol.21 (2), p.452
Main authors: Yang, Wenjie, Zhang, Jianlin, Cai, Jingju, Xu, Zhiyong
Format: Article
Language: English
Subjects:
Online access: Full text
container_end_page
container_issue 2
container_start_page 452
container_title Sensors (Basel, Switzerland)
container_volume 21
creator Yang, Wenjie
Zhang, Jianlin
Cai, Jingju
Xu, Zhiyong
description Graph convolutional networks (GCNs) have brought considerable improvement to the skeleton-based action recognition task. Existing GCN-based methods usually use a fixed spatial graph size across all layers, which severely limits the model's ability to exploit global and semantic discriminative information because of the restricted receptive fields. Furthermore, the fixed graph size causes many redundancies in the representation of actions, which is inefficient and can hinder the model from focusing on beneficial features. To address these issues, we propose a plug-and-play channel adaptive merging module (CAMM), specific to the human skeleton graph, which merges vertices from the same part of the skeleton graph adaptively and efficiently. The merge weights differ across channels, so every channel has the flexibility to integrate the joints in its own way. We then build a novel shallow graph convolutional network (SGCN) based on this module, which achieves state-of-the-art performance at a lower computational cost. Experimental results on NTU-RGB+D and Kinetics-Skeleton illustrate the superiority of our method.
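The channel adaptive merging described in the abstract — collapsing the joints of each body part into one vertex, with a separate learnable weight per channel — can be sketched as follows. This is an illustrative reconstruction from the abstract alone, not the authors' implementation; the function name, the part partition, and the softmax normalization of the merge weights are all assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def channel_adaptive_merge(x, parts, w):
    """Merge skeleton vertices into body-part vertices with per-channel weights.

    x:     (C, V) features, C channels over V joints.
    parts: list of index arrays, one per body part (e.g. arms, legs, torso);
           the partition is a modeling assumption, not taken from the paper.
    w:     (C, V) learnable merge weights; normalized here per channel and
           per part so each part feature is a convex combination of its joints.
    Returns a (C, P) array, P = len(parts): one merged vertex per part.
    """
    C, V = x.shape
    out = np.empty((C, len(parts)))
    for p, idx in enumerate(parts):
        a = softmax(w[:, idx], axis=1)          # each channel weights the part's joints differently
        out[:, p] = (a * x[:, idx]).sum(axis=1)  # weighted merge -> one vertex per part
    return out
```

Because the weights `w` differ across channels, each channel can emphasize different joints within the same part, which is the flexibility the abstract attributes to CAMM; shrinking the graph from V joints to P parts is also what reduces redundancy in deeper layers.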
doi_str_mv 10.3390/s21020452
format Article
pmid 33440785
publisher Switzerland: MDPI AG
rights 2021 by the authors
orcidid 0000-0001-8658-610X; 0000-0002-5284-2942
fulltext fulltext
identifier ISSN: 1424-8220
ispartof Sensors (Basel, Switzerland), 2021-01, Vol.21 (2), p.452
issn 1424-8220
1424-8220
language eng
recordid cdi_pubmedcentral_primary_oai_pubmedcentral_nih_gov_7827280
source MEDLINE; DOAJ Directory of Open Access Journals; MDPI - Multidisciplinary Digital Publishing Institute; EZB-FREE-00999 freely available EZB journals; PubMed Central; Free Full-Text Journals in Chemistry
subjects activity recognition
Apexes
Bones
Datasets
graph convolution network
Graph theory
Humans
Methods
Neural networks
Neural Networks, Computer
Pattern Recognition, Automated
Recognition
Semantics
Skeleton
skeleton sequence
title Shallow Graph Convolutional Network for Skeleton-Based Action Recognition