Adaptive most joint selection and covariance descriptions for a robust skeleton-based human action recognition

In this paper, we propose two effective ways of using skeleton data for human action recognition (HAR). On the one hand, the proposed method takes advantage of skeleton data thanks to its robustness to changes in human appearance and its real-time performance. On the other hand, it avoids the inherent drawbacks of skeleton data, such as noise and incorrect skeleton estimation caused by self-occlusion of the human pose. To this end, in terms of feature design, we propose to extract covariance descriptors from joint velocities and combine them with those of joint positions. In terms of 3-D skeleton-based activity representation, we propose two schemes to select the most informative joints. The proposed method is evaluated on two benchmark datasets. On the MSRAction-3D dataset, it outperforms several methods based on hand-designed features. On the challenging CMDFall dataset, it significantly improves accuracy compared with recent neural-network-based techniques. Finally, we investigate the robustness of the proposed method via a cross-dataset evaluation.
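
The abstract describes the feature design only at a high level. As a rough, hypothetical illustration of the covariance-descriptor idea, the Python sketch below builds a descriptor from joint positions and their first-order velocities; the array shapes, the per-frame flattening, and the use of the upper triangle of the covariance matrix are assumptions made for illustration, and the paper's actual pipeline (normalization, temporal structure, and the two most-informative-joint selection schemes) is not reproduced here.

```python
import numpy as np

def covariance_descriptor(frames):
    """Upper-triangular part of the covariance of per-frame feature vectors.

    frames: array of shape (T, D), one D-dimensional vector per frame.
    Returns a vector of length D * (D + 1) / 2.
    """
    cov = np.cov(frames, rowvar=False)      # (D, D) covariance over time
    iu = np.triu_indices(cov.shape[0])      # covariance is symmetric; keep the upper triangle
    return cov[iu]

def position_velocity_descriptor(skeleton):
    """Concatenate covariance descriptors of joint positions and joint velocities.

    skeleton: array of shape (T, J, 3) -- T frames, J joints, 3-D coordinates.
    """
    T, J, _ = skeleton.shape
    positions = skeleton.reshape(T, J * 3)       # flatten all joints of a frame into one vector
    velocities = np.diff(positions, axis=0)      # first-order temporal differences as velocities
    return np.concatenate([covariance_descriptor(positions),
                           covariance_descriptor(velocities)])

# Toy usage: 40 frames of a 20-joint skeleton (the joint count used by MSRAction-3D)
demo_sequence = np.random.rand(40, 20, 3)
descriptor = position_velocity_descriptor(demo_sequence)
print(descriptor.shape)  # (3660,) = 2 * (60 * 61) / 2
```

In this sketch the position and velocity descriptors are simply concatenated; how the two are actually combined, and how the most informative joints are selected, follows the schemes proposed in the paper.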


Bibliographic Details
Published in: Multimedia tools and applications, 2021-07, Vol. 80 (18), p. 27757-27783
Main authors: Nguyen, Van-Toi; Nguyen, Tien-Nam; Le, Thi-Lan; Pham, Dinh-Tan; Vu, Hai
Format: Article
Language: English
Subjects: Computer Communication Networks; Computer Science; Covariance; Data Structures and Information Theory; Datasets; Feature extraction; Human activity recognition; Human motion; Joints (anatomy); Multimedia Information Systems; Neural networks; Occlusion; Robustness; Special Purpose and Application-Based Systems
Online access: Full text
DOI: 10.1007/s11042-021-10866-4
ISSN: 1380-7501
EISSN: 1573-7721
Source: Springer Nature - Complete Springer Journals