Learning Music-Dance Representations through Explicit-Implicit Rhythm Synchronization
Although audio-visual representations have proven applicable to many downstream tasks, the representation of dance videos, which is more specific and almost always accompanied by music with complex auditory content, remains challenging and largely unexplored. Considering the intrinsic alignment between the dancer's cadenced movements and the music rhythm ...
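The abstract's explicit synchronization step rests on a simple idea: a music rhythm can be read off the amplitude of the sound intensity, and a dance rhythm off the magnitude of visual motion, after which the two peak sequences can be aligned in time. The sketch below illustrates that idea with plain NumPy; the window sizes, the frame-difference motion cue, and the peak-picking threshold are illustrative assumptions, not the authors' MuDaR implementation.

```python
import numpy as np

def amplitude_envelope(waveform, frame_len=1024, hop=512):
    """Short-time RMS amplitude of a mono waveform (window sizes are illustrative)."""
    n_frames = 1 + max(0, (len(waveform) - frame_len) // hop)
    return np.array([
        np.sqrt(np.mean(waveform[i * hop:i * hop + frame_len] ** 2))
        for i in range(n_frames)
    ])

def motion_envelope(frames):
    """Mean absolute frame difference of a (T, H, W) grayscale clip as a crude motion cue."""
    diffs = np.abs(np.diff(frames.astype(np.float32), axis=0))
    return diffs.reshape(diffs.shape[0], -1).mean(axis=1)

def pick_peaks(envelope, threshold=0.5):
    """Indices of local maxima above a relative threshold; these serve as rhythm points."""
    env = (envelope - envelope.min()) / (envelope.max() - envelope.min() + 1e-8)
    return np.array([
        i for i in range(1, len(env) - 1)
        if env[i] > env[i - 1] and env[i] >= env[i + 1] and env[i] > threshold
    ])

# Toy usage: random stand-ins for a mono waveform and a grayscale video clip.
music_peaks = pick_peaks(amplitude_envelope(np.random.randn(22050 * 4)))
dance_peaks = pick_peaks(motion_envelope(np.random.rand(120, 64, 64)))
```

To compare the two rhythms, the audio peak indices can be converted to seconds via the hop size and sample rate, and the motion peaks via the video frame rate; the paper's actual alignment objective is not reproduced here.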
Saved in:
Published in: | arXiv.org 2023-08 |
---|---|
Main authors: | Yu, Jiashuo; Pu, Junfu; Cheng, Ying; Feng, Rui; Shan, Ying |
Format: | Article |
Language: | eng |
Subjects: | Computer Science - Computer Vision and Pattern Recognition; Computer Science - Multimedia; Computer Science - Sound; Dance; Music; Representations; Rhythm; Self-supervised learning; Sound intensity; Supervised learning; Synchronism |
Online access: | Full text |
container_title | arXiv.org |
---|---|
creator | Yu, Jiashuo; Pu, Junfu; Cheng, Ying; Feng, Rui; Shan, Ying |
description | Although audio-visual representations have proven applicable to many downstream tasks, the representation of dance videos, which is more specific and almost always accompanied by music with complex auditory content, remains challenging and largely unexplored. Considering the intrinsic alignment between the dancer's cadenced movements and the music rhythm, we introduce MuDaR, a novel Music-Dance Representation learning framework that synchronizes music and dance rhythms in both explicit and implicit ways. Specifically, we derive dance rhythms from visual appearance and motion cues, inspired by music rhythm analysis. The visual rhythms are then temporally aligned with their music counterparts, which are extracted from the amplitude of the sound intensity. Meanwhile, we exploit the implicit coherence of rhythms in the audio and visual streams through contrastive learning: the model learns a joint embedding by predicting the temporal consistency between audio-visual pairs. The music-dance representation, together with the ability to detect audio and visual rhythms, can further be applied to three downstream tasks: (a) dance classification, (b) music-dance retrieval, and (c) music-dance retargeting. Extensive experiments demonstrate that our proposed framework outperforms other self-supervised methods by a large margin. |
doi_str_mv | 10.48550/arxiv.2207.03190 |
format | Article |
fulltext | fulltext |
identifier | EISSN: 2331-8422 |
ispartof | arXiv.org, 2023-08 |
issn | 2331-8422 |
language | eng |
recordid | cdi_arxiv_primary_2207_03190 |
source | arXiv.org; Free E-Journals |
subjects | Computer Science - Computer Vision and Pattern Recognition; Computer Science - Multimedia; Computer Science - Sound; Dance; Music; Representations; Rhythm; Self-supervised learning; Sound intensity; Supervised learning; Synchronism |
title | Learning Music-Dance Representations through Explicit-Implicit Rhythm Synchronization |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-27T02%3A30%3A30IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_arxiv&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Learning%20Music-Dance%20Representations%20through%20Explicit-Implicit%20Rhythm%20Synchronization&rft.jtitle=arXiv.org&rft.au=Yu,%20Jiashuo&rft.date=2023-08-10&rft.eissn=2331-8422&rft_id=info:doi/10.48550/arxiv.2207.03190&rft_dat=%3Cproquest_arxiv%3E2686416162%3C/proquest_arxiv%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2686416162&rft_id=info:pmid/&rfr_iscdi=true |
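The description field above also credits an implicit path: contrastive learning that ties the audio and visual streams together by predicting their temporal consistency. Below is a minimal PyTorch sketch of a symmetric InfoNCE-style objective in which temporally matched audio and visual clip embeddings are positives and all other pairings in the batch are negatives; the encoder outputs, embedding size, and temperature are placeholder assumptions rather than the MuDaR architecture.

```python
import torch
import torch.nn.functional as F

def audio_visual_infonce(audio_emb, visual_emb, temperature=0.07):
    """Symmetric contrastive loss over a batch of paired clip embeddings.

    audio_emb, visual_emb: (B, D) tensors whose i-th rows come from the same
    time window; every off-diagonal pairing acts as a negative.
    """
    a = F.normalize(audio_emb, dim=-1)
    v = F.normalize(visual_emb, dim=-1)
    logits = a @ v.t() / temperature               # (B, B) scaled cosine similarities
    targets = torch.arange(a.size(0), device=a.device)
    loss_a2v = F.cross_entropy(logits, targets)    # match each audio clip to its video clip
    loss_v2a = F.cross_entropy(logits.t(), targets)
    return 0.5 * (loss_a2v + loss_v2a)

# Toy usage with random embeddings standing in for encoder outputs.
loss = audio_visual_infonce(torch.randn(8, 128), torch.randn(8, 128))
```

Minimizing this loss drives the joint embedding to agree exactly when the audio and visual clips are temporally consistent, which is the behaviour the abstract attributes to the implicit branch.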