UniMuMo: Unified Text, Music and Motion Generation
We introduce UniMuMo, a unified multimodal model capable of taking arbitrary text, music, and motion data as input conditions to generate outputs across all three modalities. To address the lack of time-synchronized data, we align unpaired music and motion data based on rhythmic patterns to leverage existing large-scale music-only and motion-only datasets. By converting music, motion, and text into a token-based representation, our model bridges these modalities through a unified encoder-decoder transformer architecture. To support multiple generation tasks within a single framework, we introduce several architectural improvements. We propose encoding motion with a music codebook, mapping motion into the same feature space as music. We introduce a music-motion parallel generation scheme that unifies all music and motion generation tasks into a single transformer decoder architecture with a single training task of music-motion joint generation. Moreover, the model is designed by fine-tuning existing pre-trained single-modality models, significantly reducing computational demands. Extensive experiments demonstrate that UniMuMo achieves competitive results on all unidirectional generation benchmarks across music, motion, and text modalities. Quantitative results are available on the project page: https://hanyangclarence.github.io/unimumo_demo/.
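
The abstract describes the architecture only at a high level. As a rough illustration of the music-motion parallel generation idea (a single causal decoder predicting time-aligned music and motion token streams over a shared codebook), here is a minimal PyTorch sketch. The class name, dimensions, fusion-by-addition, and two-head output are assumptions made for this example, not the paper's actual implementation.

```python
# Hypothetical sketch (not the authors' code): one transformer decoder predicts
# music tokens and motion tokens in parallel at every timestep. Vocabulary size,
# model width, and layer counts below are illustrative only.
import torch
import torch.nn as nn


class ParallelMusicMotionDecoder(nn.Module):
    def __init__(self, vocab_size=2048, d_model=512, n_layers=6, n_heads=8, max_len=1024):
        super().__init__()
        # Motion is assumed to be quantized with the music codebook, so both
        # streams share one token vocabulary / feature space.
        self.music_emb = nn.Embedding(vocab_size, d_model)
        self.motion_emb = nn.Embedding(vocab_size, d_model)
        self.pos_emb = nn.Embedding(max_len, d_model)
        layer = nn.TransformerEncoderLayer(
            d_model, n_heads, dim_feedforward=4 * d_model, batch_first=True
        )
        self.backbone = nn.TransformerEncoder(layer, n_layers)  # run with a causal mask
        self.music_head = nn.Linear(d_model, vocab_size)
        self.motion_head = nn.Linear(d_model, vocab_size)

    def forward(self, music_tokens, motion_tokens):
        # music_tokens, motion_tokens: (batch, time) integer ids, time-aligned
        # (e.g. via the rhythm-based pairing mentioned in the abstract).
        T = music_tokens.shape[1]
        pos = torch.arange(T, device=music_tokens.device)
        # Summing the embeddings fuses both streams into one sequence position
        # per timestep; feature-wise concatenation would be an alternative.
        x = self.music_emb(music_tokens) + self.motion_emb(motion_tokens) + self.pos_emb(pos)
        causal = nn.Transformer.generate_square_subsequent_mask(T).to(x.device)
        h = self.backbone(x, mask=causal)
        # Two heads predict the next music token and the next motion token in parallel.
        return self.music_head(h), self.motion_head(h)


# Toy usage: joint next-token training objective on random ids (shapes only).
model = ParallelMusicMotionDecoder()
music = torch.randint(0, 2048, (2, 16))
motion = torch.randint(0, 2048, (2, 16))
music_logits, motion_logits = model(music, motion)
loss = (
    nn.functional.cross_entropy(music_logits[:, :-1].reshape(-1, 2048), music[:, 1:].reshape(-1))
    + nn.functional.cross_entropy(motion_logits[:, :-1].reshape(-1, 2048), motion[:, 1:].reshape(-1))
)
```

In this toy setup the text conditioning, the audio codec, and the motion encoder are omitted; the point is only that one decoder can be trained on the joint next-token task for both streams at once, which is what lets a single training objective cover all music and motion generation tasks.
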
| Field | Value |
|---|---|
| creator | Yang, Han; Su, Kun; Zhang, Yutong; Chen, Jiaben; Qian, Kaizhi; Liu, Gaowen; Gan, Chuang |
| format | Article |
| identifier | DOI: 10.48550/arxiv.2410.04534 |
| language | eng |
| recordid | cdi_arxiv_primary_2410_04534 |
| source | arXiv.org |
| subjects | Computer Science - Computer Vision and Pattern Recognition; Computer Science - Graphics; Computer Science - Learning; Computer Science - Multimedia; Computer Science - Sound |
| title | UniMuMo: Unified Text, Music and Motion Generation |
| url | https://arxiv.org/abs/2410.04534 |