MVHuman: Tailoring 2D Diffusion with Multi-view Sampling For Realistic 3D Human Generation
Saved in:

Main authors:
Format: Article
Language: eng
Keywords:
Online access: Order full text
Abstract: Recent months have witnessed rapid progress in 3D generation based on diffusion models. Most advances require fine-tuning existing 2D Stable Diffusions into multi-view settings or tedious distilling operations and hence fall short of 3D human generation due to the lack of diverse 3D human datasets. We present an alternative scheme named MVHuman to generate human radiance fields from text guidance, with consistent multi-view images directly sampled from pre-trained Stable Diffusions without any fine-tuning or distilling. Our core is a multi-view sampling strategy that tailors the denoising processes of the pre-trained network to generate consistent multi-view images. It encompasses view-consistent conditioning, replacing the original noises with "consistency-guided noises", optimizing latent codes, and utilizing cross-view attention layers. With the multi-view images obtained through the sampling process, we adopt geometry refinement and 3D radiance field generation, followed by a subsequent neural blending scheme for free-view rendering. Extensive experiments demonstrate the efficacy of our method, as well as its superiority to state-of-the-art 3D human generation methods.
DOI: 10.48550/arxiv.2312.10120
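As a rough illustration of the multi-view sampling idea described in the abstract, the following minimal Python sketch denoises one latent per camera view jointly, blending each view's noise estimate with a cross-view consensus before every update step. The function names, the plain averaging, the blend weight, and the simplified update rule are hypothetical stand-ins for the paper's consistency-guided noises, view-consistent conditioning, and cross-view attention; this is not the authors' implementation or the Stable Diffusion API.

```python
# Illustrative sketch (not the authors' code): a toy joint denoising loop in
# which each view's noise estimate is mixed with a cross-view consensus,
# approximating the idea of "consistency-guided noises" from the abstract.
import numpy as np

rng = np.random.default_rng(0)

NUM_VIEWS, LATENT_DIM, NUM_STEPS = 4, 16, 50
BLEND = 0.5  # hypothetical weight between a view's own noise and the consensus


def predict_noise(latent: np.ndarray, t: int) -> np.ndarray:
    """Placeholder for the frozen 2D diffusion network's noise prediction."""
    return 0.1 * latent + 0.01 * t * rng.standard_normal(latent.shape)


def cross_view_consensus(noises: np.ndarray) -> np.ndarray:
    """Toy consistency signal: the mean noise estimate over all views.

    In MVHuman this role is played by consistency-guided noises derived from
    view-consistent conditioning and cross-view attention, not a plain mean.
    """
    return noises.mean(axis=0, keepdims=True).repeat(noises.shape[0], axis=0)


# One latent per camera view, denoised jointly so the views stay consistent.
latents = rng.standard_normal((NUM_VIEWS, LATENT_DIM))
for t in reversed(range(NUM_STEPS)):
    per_view = np.stack([predict_noise(latents[v], t) for v in range(NUM_VIEWS)])
    guided = (1.0 - BLEND) * per_view + BLEND * cross_view_consensus(per_view)
    latents = latents - (1.0 / NUM_STEPS) * guided  # simplified update step

print("final per-view latent norms:", np.linalg.norm(latents, axis=1).round(3))
```

In the actual method, the consensus would come from view-consistent conditioning and cross-view attention operating in the latent space of the pre-trained network across calibrated views, rather than a plain mean over views.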