Music-Driven Synchronous Dance Generation Considering K-Pop Musical and Choreographical Characteristics


Bibliographic Details
Published in: IEEE Access 2024, Vol. 12, p. 94152-94163
Main authors: Kim, Seohyun; Lee, Kyogu
Format: Article
Language: English
Online access: Full text
container_end_page 94163
container_issue
container_start_page 94152
container_title IEEE access
container_volume 12
creator Kim, Seohyun
Lee, Kyogu
description Generating dance movements from music is a highly challenging task, as it requires a model to comprehend concepts from two different modalities: audio and video. Nevertheless, deep-learning-based dance generation has recently been studied actively. Existing dance generation research tends to focus on limited genres or on a single dancer, so when K-pop music, which mixes multiple genres, was applied to existing methods, they failed to generate dances spanning various genres or group dances. In this paper, we propose an autoregressive K-pop dance generation model, a system designed to generate two-person synchronous dances from K-pop music. To this end, we created a dataset by collecting videos of multiple dancers simultaneously dancing to K-pop music across various genres. Generating synchronous dances has two meanings: one is to generate a dance that matches both a given piece of music and a given dance, and the other is to simultaneously generate multiple dances that match the given music. We call these secondary dance generation and group dance generation, respectively, and designed the proposed model to perform both. In addition, we propose additional learning methods that help the model generate synchronous dances more faithfully. To assess the performance of the proposed model, both qualitative and quantitative evaluations are conducted, demonstrating its effectiveness and suitability for generating synchronous dances for K-pop music.
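The abstract describes an autoregressive model that rolls out poses frame by frame, conditioned on the music and, for group dance generation, on the partner dancer. The following is a minimal sketch of that rollout structure only, not the authors' implementation: `predict_pose` is a hypothetical stand-in for the learned network, and poses are toy flat vectors.

```python
# Minimal sketch (NOT the paper's implementation) of autoregressive
# two-dancer rollout: each next pose depends on the current music
# feature and on both dancers' previous poses, which is what makes
# the generated duet "synchronous".
import math
from typing import List

Pose = List[float]  # a pose as a flat vector of joint coordinates

def predict_pose(music_feat: float, own_prev: Pose, partner_prev: Pose) -> Pose:
    # Hypothetical stand-in for a learned autoregressive model.
    return [0.5 * (o + p) + 0.1 * math.sin(music_feat)
            for o, p in zip(own_prev, partner_prev)]

def generate_duet(music: List[float], seed_a: Pose, seed_b: Pose):
    # Group dance generation: both dancers are rolled out frame by
    # frame from the same music, each conditioned on the other.
    a, b = [seed_a], [seed_b]
    for feat in music:
        a.append(predict_pose(feat, a[-1], b[-1]))
        b.append(predict_pose(feat, b[-1], a[-2]))  # partner pose from the same frame
    return a, b

a, b = generate_duet(music=[0.0, 1.0, 2.0], seed_a=[1.0, 0.0], seed_b=[0.0, 1.0])
```

Because each dancer is conditioned on the other, even this toy stub pulls the two pose trajectories together over time, which is the intuition behind generating synchronous rather than independent dances.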
doi_str_mv 10.1109/ACCESS.2024.3420433
format Article
fulltext fulltext
identifier ISSN: 2169-3536
ispartof IEEE access, 2024, Vol.12, p.94152-94163
issn 2169-3536
2169-3536
language eng
recordid cdi_crossref_primary_10_1109_ACCESS_2024_3420433
source IEEE Open Access Journals; DOAJ Directory of Open Access Journals; EZB-FREE-00999 freely available EZB journals
subjects autoregressive model
Dance
Data models
Deep learning
Feature extraction
Genre
Humanities
K-pop group dance generation
K-pop music
multi-step learning
Music
Popular music
Synchronous dance generation
Video on demand
Web sites
title Music-Driven Synchronous Dance Generation Considering K-Pop Musical and Choreographical Characteristics