StableMoFusion: Towards Robust and Efficient Diffusion-based Motion Generation Framework


Saved in:
Bibliographic Details
Main Authors: Huang, Yiheng; Yang, Hui; Luo, Chuanchen; Wang, Yuxi; Xu, Shibiao; Zhang, Zhaoxiang; Zhang, Man; Peng, Junran
Format: Article
Language: English
Subjects: Computer Science - Computer Vision and Pattern Recognition; Computer Science - Multimedia
Online Access: Order full text
creator Huang, Yiheng; Yang, Hui; Luo, Chuanchen; Wang, Yuxi; Xu, Shibiao; Zhang, Zhaoxiang; Zhang, Man; Peng, Junran
description Thanks to the powerful generative capacity of diffusion models, recent years have witnessed rapid progress in human motion generation. Existing diffusion-based methods employ disparate network architectures and training strategies, and the effect of each design choice remains unclear. In addition, the iterative denoising process incurs considerable computational overhead, which is prohibitive for real-time scenarios such as virtual characters and humanoid robots. For this reason, we first conduct a comprehensive investigation into network architectures, training strategies, and inference processes. Based on this analysis, we tailor each component for efficient, high-quality human motion generation. Despite the promising performance, the tailored model still suffers from foot skating, a ubiquitous issue in diffusion-based solutions. To eliminate foot skating, we identify foot-ground contact and correct foot motions along the denoising process. By organically combining these well-designed components, we present StableMoFusion, a robust and efficient framework for human motion generation. Extensive experimental results show that our StableMoFusion performs favorably against current state-of-the-art methods. Project page: https://h-y1heng.github.io/StableMoFusion-page/
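The footskate correction described in the abstract can be pictured as a small post-hoc step inside the reverse-diffusion loop: detect which foot joints are in ground contact, then pin them in place while denoising continues. The sketch below is a minimal illustration under assumed conventions, not the authors' implementation; the joint indices, contact thresholds, the `denoise_step` placeholder, and the choice to apply the correction only in the final low-noise steps are all hypothetical.

```python
import numpy as np

# Hypothetical skeleton layout: T frames, J joints, 3-D joint positions, y-axis up.
T, J = 60, 22
FOOT_JOINTS = [7, 8, 10, 11]   # assumed indices of the ankle/toe joints
CONTACT_HEIGHT = 0.05          # assumed height threshold for ground contact (metres)
CONTACT_VELOCITY = 0.01        # assumed per-frame speed threshold for a static foot


def detect_foot_contact(motion):
    """Mark a foot joint as 'in contact' when it is both low and nearly static."""
    feet = motion[:, FOOT_JOINTS, :]                       # (T, F, 3)
    height = feet[..., 1]                                  # (T, F)
    speed = np.zeros_like(height)
    speed[1:] = np.linalg.norm(np.diff(feet, axis=0), axis=-1)
    return (height < CONTACT_HEIGHT) & (speed < CONTACT_VELOCITY)


def correct_foot_skating(motion, contact):
    """Pin each contacting foot joint to its position at the first frame of contact."""
    corrected = motion.copy()
    for f, joint in enumerate(FOOT_JOINTS):
        anchor = None
        for t in range(corrected.shape[0]):
            if contact[t, f]:
                if anchor is None:
                    anchor = corrected[t, joint].copy()
                corrected[t, joint] = anchor
            else:
                anchor = None
    return corrected


def denoise_step(x_t, t):
    """Placeholder for one reverse-diffusion step; a real motion model goes here."""
    return 0.98 * x_t


# Reverse diffusion: run the denoiser, then correct foot motion in the final,
# low-noise steps where foot-ground contact can be estimated reliably.
x = np.random.randn(T, J, 3)
for t in reversed(range(50)):
    x = denoise_step(x, t)
    if t < 10:  # assumed cut-off; the exact schedule is a tunable choice
        x = correct_foot_skating(x, detect_foot_contact(x))
```

In practice, the contact thresholds and the range of denoising steps over which the correction is applied would need to be tuned to the dataset's skeleton, units, and frame rate.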
doi_str_mv 10.48550/arxiv.2405.05691
format Article
identifier DOI: 10.48550/arxiv.2405.05691
language eng
recordid cdi_arxiv_primary_2405_05691
source arXiv.org
subjects Computer Science - Computer Vision and Pattern Recognition
Computer Science - Multimedia
title StableMoFusion: Towards Robust and Efficient Diffusion-based Motion Generation Framework