Action recognition algorithm based on skeletal joint data and adaptive time pyramid
Human action recognition technology plays a crucial role in the fields of video surveillance, video retrieval, sports medicine and human–computer interaction. Research on and application of this technology have been slowed by complex environments and the variability of human actions. As a new sensor, Kinect pro...
Saved in:
Published in: | Signal, image and video processing, 2022, Vol.16 (6), p.1615-1622 |
---|---|
Main authors: | Sima, Mingjun ; Hou, Mingzheng ; Zhang, Xin ; Ding, Jianwei ; Feng, Ziliang |
Format: | Article |
Language: | English |
Subjects: | |
Online access: | Full text |
container_end_page | 1622 |
---|---|
container_issue | 6 |
container_start_page | 1615 |
container_title | Signal, image and video processing |
container_volume | 16 |
creator | Sima, Mingjun ; Hou, Mingzheng ; Zhang, Xin ; Ding, Jianwei ; Feng, Ziliang |
description | Human action recognition technology plays a crucial role in the fields of video surveillance, video retrieval, sports medicine and human–computer interaction. Research on and application of this technology have been slowed by complex environments and the variability of human actions. As a new sensor, Kinect offers a new approach to human action recognition: it can synchronously capture skeleton joint data from a target. In this paper, we propose a human action recognition method based on skeletal joint data. The motion and static information of a human action are first fused into a feature, and skeletal vectors are used to construct a motion model that describes the variation of the action after feature extraction. The model is then fed into an adaptive time pyramid to capture global and local information, and the skeletal joint features in each time period are processed. Finally, a kernel extreme learning machine performs the recognition. Experimental results show that our method exploits skeleton information effectively in comparison with other methods. |
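The abstract outlines a two-stage pipeline: per-frame skeleton features are pooled over a temporal pyramid and then classified with a kernel extreme learning machine (KELM). The sketch below is a generic, simplified illustration of those two ingredients, not the authors' implementation: it uses a fixed (non-adaptive) pyramid and a standard RBF-kernel KELM, and all names and parameters (`pyramid_pool`, `gamma`, `C`) are assumptions for illustration only.

```python
import numpy as np

def pyramid_pool(frames, levels=3):
    """Mean-pool per-frame features over a fixed temporal pyramid.

    Level l splits the sequence into 2**l equal segments
    (1 + 2 + 4 = 7 pooled vectors for levels=3), concatenated
    into a single sequence descriptor.
    """
    parts = []
    for level in range(levels):
        for seg in np.array_split(frames, 2 ** level):
            parts.append(seg.mean(axis=0))
    return np.concatenate(parts)

def rbf_kernel(A, B, gamma=0.1):
    # Pairwise RBF kernel exp(-gamma * ||a - b||^2) between rows of A and B.
    sq = (np.sum(A ** 2, axis=1)[:, None]
          + np.sum(B ** 2, axis=1)[None, :] - 2.0 * A @ B.T)
    return np.exp(-gamma * sq)

def kelm_train(X, y, n_classes, C=100.0, gamma=0.1):
    # Closed-form KELM output weights: beta = (I/C + K)^(-1) T,
    # where T is the one-hot target matrix. No iterative training.
    T = np.eye(n_classes)[y]
    K = rbf_kernel(X, X, gamma)
    return np.linalg.solve(np.eye(len(X)) / C + K, T)

def kelm_predict(X_train, beta, X_new, gamma=0.1):
    # The class with the largest kernel-weighted output wins.
    return np.argmax(rbf_kernel(X_new, X_train, gamma) @ beta, axis=1)
```

The fixed pyramid above pools over 1, 2 and 4 equal segments; the paper's adaptive variant would instead choose segment boundaries from the motion itself.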
doi_str_mv | 10.1007/s11760-021-02116-9 |
format | Article |
publisher | Springer London |
fulltext | fulltext |
identifier | ISSN: 1863-1703 |
ispartof | Signal, image and video processing, 2022, Vol.16 (6), p.1615-1622 |
issn | 1863-1703 ; 1863-1711 |
language | eng |
recordid | cdi_proquest_journals_2696496813 |
source | SpringerLink Journals |
subjects | Algorithms ; Artificial neural networks ; Computer Imaging ; Computer Science ; Feature extraction ; Human activity recognition ; Human motion ; Image Processing and Computer Vision ; Joints (anatomy) ; Machine learning ; Multimedia Information Systems ; Original Paper ; Pattern Recognition and Graphics ; Signal,Image and Speech Processing ; Sports medicine ; Vision |
title | Action recognition algorithm based on skeletal joint data and adaptive time pyramid |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-28T15%3A10%3A37IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_cross&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Action%20recognition%20algorithm%20based%20on%20skeletal%20joint%20data%20and%20adaptive%20time%20pyramid&rft.jtitle=Signal,%20image%20and%20video%20processing&rft.au=Sima,%20Mingjun&rft.date=2022&rft.volume=16&rft.issue=6&rft.spage=1615&rft.epage=1622&rft.pages=1615-1622&rft.issn=1863-1703&rft.eissn=1863-1711&rft_id=info:doi/10.1007/s11760-021-02116-9&rft_dat=%3Cproquest_cross%3E2696496813%3C/proquest_cross%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2696496813&rft_id=info:pmid/&rfr_iscdi=true |