Deep Learning for Ballroom Dance Recognition: A Temporal and Trajectory-Aware Classification Model With Three-Dimensional Pose Estimation and Wearable Sensing

Dance performance recognition methods have been investigated and have shown various applications, such as picture-pose evaluation and the synchronization of foot timing and direction. However, detailed analysis and feedback are still missing; providing them requires understanding a performance at the component level. Specifically, we formulate this as a dance-figure classification problem using three-dimensional body joints and wearable sensors. Our model is based on long short-term memory (LSTM) and includes a temporal and trajectory-wise structure that uses the trajectory information within a timestep, together with a temporal masking module. Our proposed method achieves 93% accuracy, substantially outperforming the baseline (84.7%) and coming very close to the accuracy of experienced dancers (93.6%). We have made our ballroom dance performance dataset open to researchers to further develop the activity recognition field.

Bibliographic details
Published in: IEEE Sensors Journal, 2021-11, Vol. 21 (22), p. 25437-25448
Authors: Matsuyama, Hitoshi; Aoki, Shunsuke; Yonezawa, Takuro; Hiroi, Kei; Kaji, Katsuhiko; Kawaguchi, Nobuo
Format: Article
Language: English
Abstract: Dance performance recognition methods have been investigated and have shown various applications, such as picture-pose evaluation and the synchronization of foot timing and direction. However, detailed analysis and feedback are still missing; providing them requires understanding a performance at the component level. Specifically, we formulate this as a dance-figure classification problem using three-dimensional body joints and wearable sensors. Our model is based on long short-term memory (LSTM) and includes a temporal and trajectory-wise structure that uses the trajectory information within a timestep, together with a temporal masking module. Our proposed method achieves 93% accuracy, substantially outperforming the baseline (84.7%) and coming very close to the accuracy of experienced dancers (93.6%). We have made our ballroom dance performance dataset open to researchers to further develop the activity recognition field.
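The abstract above describes an LSTM over per-timestep pose and wearable-sensor features, combined with a temporal masking module. The paper's actual architecture is not reproduced here; as a minimal illustrative sketch only, the following pure-NumPy code shows the general idea of running an LSTM over a feature sequence and pooling hidden states under a temporal mask before classification. All shapes, names, and the masked mean-pooling step are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch: single-layer LSTM over per-timestep features, with a
# temporal mask that removes padded/ignored timesteps from the pooled summary.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class TinyLSTMClassifier:
    """Minimal LSTM + masked mean-pooling + linear softmax classifier."""
    def __init__(self, n_in, n_hid, n_classes):
        s = 1.0 / np.sqrt(n_hid)
        # One stacked weight matrix for the input/forget/cell/output gates.
        self.W = rng.uniform(-s, s, (4 * n_hid, n_in + n_hid))
        self.b = np.zeros(4 * n_hid)
        self.Wc = rng.uniform(-s, s, (n_classes, n_hid))
        self.n_hid = n_hid

    def forward(self, x, mask):
        # x: (T, n_in) feature sequence; mask: (T,) with 1.0 = keep timestep.
        T = x.shape[0]
        h = np.zeros(self.n_hid)
        c = np.zeros(self.n_hid)
        hs = np.zeros((T, self.n_hid))
        for t in range(T):
            z = self.W @ np.concatenate([x[t], h]) + self.b
            i, f, g, o = np.split(z, 4)           # gate pre-activations
            c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
            h = sigmoid(o) * np.tanh(c)
            hs[t] = h
        # Temporal masking: average hidden states over unmasked timesteps only.
        pooled = (hs * mask[:, None]).sum(axis=0) / max(mask.sum(), 1.0)
        logits = self.Wc @ pooled
        e = np.exp(logits - logits.max())
        return e / e.sum()                        # class probabilities

model = TinyLSTMClassifier(n_in=12, n_hid=16, n_classes=5)
x = rng.normal(size=(30, 12))        # e.g. 30 timesteps of joint/sensor features
mask = np.ones(30)
mask[25:] = 0.0                      # last 5 timesteps masked out
probs = model.forward(x, mask)
print(probs.shape, float(probs.sum()))
```

With random weights the output is of course meaningless; the sketch only shows how a temporal mask confines the pooled representation to valid timesteps before the figure-level classification.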
DOI: 10.1109/JSEN.2021.3098744
ISSN: 1530-437X
EISSN: 1558-1748
Source: IEEE Xplore
Subjects:
Activity recognition
Ballroom dancing
Classification
Dance
Datasets
image motion analysis
Machine learning
Mirrors
neural networks
Pose estimation
sensor systems and applications
Sensors
Sports
Synchronism
Three dimensional bodies
Three dimensional models
Three-dimensional displays
Trajectory
Wearable sensors
Wearable technology