DRiVE: Diffusion-based Rigging Empowers Generation of Versatile and Expressive Characters
Format: Article
Language: English
Abstract: Recent advances in generative models have enabled high-quality 3D character reconstruction from multi-modal inputs. However, animating these generated characters remains a challenging task, especially for complex elements like garments and hair, due to the lack of large-scale datasets and effective rigging methods. To address this gap, we curate AnimeRig, a large-scale dataset with detailed skeleton and skinning annotations. Building upon this, we propose DRiVE, a novel framework for generating and rigging 3D human characters with intricate structures. Unlike existing methods, DRiVE utilizes a 3D Gaussian representation, facilitating efficient animation and high-quality rendering. We further introduce GSDiff, a 3D Gaussian-based diffusion module that predicts joint positions as spatial distributions, overcoming the limitations of regression-based approaches. Extensive experiments demonstrate that DRiVE achieves precise rigging results, enabling realistic dynamics for clothing and hair, and surpassing previous methods in both quality and versatility. The code and dataset will be made public for academic use upon acceptance.
DOI: 10.48550/arxiv.2411.17423
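
The abstract's central technical idea is that GSDiff predicts skeleton joint positions as a spatial distribution via a diffusion model rather than by direct regression. Below is a minimal, hypothetical PyTorch sketch of that general idea; the module names, joint count, conditioning feature, and noise schedule are illustrative assumptions and do not reflect the paper's actual GSDiff architecture.

```python
# Hypothetical sketch: predicting skeleton joint positions as a distribution
# with a DDPM-style diffusion model, instead of regressing a single estimate.
# All names, shapes, and hyperparameters here are assumptions for illustration.
import torch
import torch.nn as nn

NUM_JOINTS, COND_DIM, T = 24, 128, 1000  # assumed joint count / feature size / steps

# Linear noise schedule and cumulative alpha products used by DDPM-style models.
betas = torch.linspace(1e-4, 0.02, T)
alphas_cum = torch.cumprod(1.0 - betas, dim=0)

class JointDenoiser(nn.Module):
    """Predicts the noise added to joint coordinates, conditioned on a
    character feature (e.g. pooled from a 3D Gaussian representation)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(NUM_JOINTS * 3 + COND_DIM + 1, 256), nn.SiLU(),
            nn.Linear(256, 256), nn.SiLU(),
            nn.Linear(256, NUM_JOINTS * 3),
        )

    def forward(self, noisy_joints, t, cond):
        t_feat = t.float().unsqueeze(-1) / T  # crude scalar timestep embedding
        x = torch.cat([noisy_joints.flatten(1), cond, t_feat], dim=-1)
        return self.net(x).view(-1, NUM_JOINTS, 3)

def training_step(model, joints, cond):
    """One DDPM training step: noise the joints at a random timestep,
    then ask the model to predict that noise."""
    b = joints.shape[0]
    t = torch.randint(0, T, (b,))
    noise = torch.randn_like(joints)
    a = alphas_cum[t].view(b, 1, 1)
    noisy = a.sqrt() * joints + (1 - a).sqrt() * noise
    return nn.functional.mse_loss(model(noisy, t, cond), noise)

@torch.no_grad()
def sample_joints(model, cond):
    """Ancestral sampling: start from Gaussian noise and denoise step by step."""
    b = cond.shape[0]
    x = torch.randn(b, NUM_JOINTS, 3)
    for i in reversed(range(T)):
        t = torch.full((b,), i)
        eps = model(x, t, cond)
        x = (x - betas[i] / (1 - alphas_cum[i]).sqrt() * eps) / (1.0 - betas[i]).sqrt()
        if i > 0:
            x = x + betas[i].sqrt() * torch.randn_like(x)
    return x  # one plausible skeleton; resampling draws from the distribution
```

Sampling repeatedly with the same conditioning feature yields multiple plausible skeletons, which illustrates the qualitative advantage a distribution-based predictor has over a regressor that must commit to a single point estimate.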