Action recognition using multi-directional projected depth motion maps

Bibliographic Details
Published in: Journal of ambient intelligence and humanized computing, 2023-11, Vol. 14 (11), p. 14767-14773
Authors: Satyamurthi, Sowndarya; Tian, Jing; Chua, Matthew Chin Heng
Format: Article
Language: English
Online Access: Full text
Publisher: Springer Berlin Heidelberg
Description: Camera-based action recognition plays a key role in diverse computer vision applications such as human-computer interaction. This paper proposes a new action recognition approach using motion descriptors based on multi-directional projected depth motion maps. First, for the input depth video sequence, all the depth frames in the video are projected onto multiple planes to form projected images. The absolute difference between two consecutive projected images is accumulated over the entire depth video to establish motion maps from multiple views. Then, the local motion consistency of each map is examined to form a histogram of local binary patterns; these histograms are concatenated and fed into a kernel-based extreme learning machine for action recognition. In contrast to conventional approaches, in which only three directions are used to calculate the projected depth images for motion feature extraction, the proposed approach provides an effective and flexible framework for examining depth motion maps in multiple projected directions. The proposed approach is evaluated on the well-known MSRA action and gesture video benchmark datasets to demonstrate its superior performance.
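The map-building step in the abstract (accumulating absolute differences of consecutive projected frames over the whole sequence) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the projection of depth frames onto the multiple planes is paper-specific, so it is assumed here to have already produced a per-view frame sequence.

```python
import numpy as np

def depth_motion_map(projected_frames):
    """Build one depth motion map for a single projection view by
    accumulating the absolute difference between every pair of
    consecutive projected frames over the whole sequence."""
    frames = np.asarray(projected_frames, dtype=np.float64)
    # |I_t - I_{t-1}| summed over t = 1 .. T-1
    return np.abs(np.diff(frames, axis=0)).sum(axis=0)

# One map per projection direction; the per-view frame sequences
# (random stand-ins here) would come from the multi-plane projection.
views = [np.random.rand(20, 32, 32) for _ in range(5)]
maps = [depth_motion_map(v) for v in views]
```

Each map then serves as the input to the local-binary-pattern feature extraction described next in the abstract.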
DOI: 10.1007/s12652-018-1136-1
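The classification stage the abstract describes (concatenated local-binary-pattern histograms classified by a kernel-based extreme learning machine) can be sketched in the same spirit. The RBF kernel, the regularization constant C, and the closed-form ridge solution below are standard choices for kernel ELM, not details taken from the paper; LBP feature extraction is omitted and generic feature vectors are assumed.

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # Pairwise squared Euclidean distances, then Gaussian kernel.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

class KernelELM:
    """Minimal kernel-based extreme learning machine: closed-form
    ridge solution beta = (K + I/C)^-1 T on the training kernel K,
    with T the one-hot label matrix (a standard kernel-ELM form)."""

    def __init__(self, C=100.0, gamma=1.0):
        self.C, self.gamma = C, gamma

    def fit(self, X, y):
        self.X = np.asarray(X, dtype=float)
        y = np.asarray(y)
        self.classes = np.unique(y)
        T = (y[:, None] == self.classes[None, :]).astype(float)  # one-hot
        K = rbf_kernel(self.X, self.X, self.gamma)
        self.beta = np.linalg.solve(K + np.eye(len(self.X)) / self.C, T)
        return self

    def predict(self, X):
        K = rbf_kernel(np.asarray(X, dtype=float), self.X, self.gamma)
        return self.classes[np.argmax(K @ self.beta, axis=1)]
```

In the paper's pipeline, the input vectors would be the concatenated LBP histograms computed over the per-view depth motion maps; here any fixed-length feature vectors work.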
ISSN: 1868-5137
EISSN: 1868-5145
Subjects: Activity recognition; Artificial Intelligence; Artificial neural networks; Computational Intelligence; Computer vision; Energy; Engineering; Feature extraction; Machine learning; Neural networks; Original Research; Robotics and Automation; User Interfaces and Human Computer Interaction