GSD: View-Guided Gaussian Splatting Diffusion for 3D Reconstruction
Main authors:
Format: Article
Language: English
Subjects:
Online access: Order full text
Abstract: We present GSD, a diffusion model approach based on the Gaussian
Splatting (GS) representation for 3D object reconstruction from a single view.
Prior works suffer from inconsistent 3D geometry or mediocre rendering quality
due to improper representations. We take a step towards resolving these
shortcomings by utilizing the recent state-of-the-art 3D explicit
representation, Gaussian Splatting, together with an unconditional diffusion
model. This model learns to generate 3D objects represented by sets of GS
ellipsoids. With these strong generative 3D priors, the diffusion model,
though trained unconditionally, is ready for view-guided reconstruction
without further fine-tuning. This is achieved by propagating fine-grained 2D
features through the efficient yet flexible splatting function and the guided
denoising sampling process. In addition, a 2D diffusion model is employed to
enhance rendering fidelity and to improve the quality of the reconstructed GS
by polishing and re-using the rendered images. The final reconstructed objects
explicitly come with high-quality 3D structure and texture, and can be
efficiently rendered from arbitrary views. Experiments on the challenging
real-world CO3D dataset demonstrate the superiority of our approach. Project
page: https://yxmu.foo/GSD/
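The abstract describes view-guided reconstruction as steering an unconditional diffusion model with gradients propagated through a differentiable splatting function during denoising. The paper's code is not reproduced here, so the following is a minimal sketch of that general guidance scheme, assuming a toy isotropic 2D `splat` function in place of a full differentiable 3D Gaussian Splatting renderer, a placeholder `denoiser`, and illustrative hyperparameters; none of these names or values come from the paper.

```python
# Sketch: guided DDPM sampling where each step renders the predicted clean
# Gaussians and nudges them toward the observed view via a rendering loss.
import torch

def splat(gaussians, H=32, W=32):
    """Toy differentiable splatting. Each row is (x, y, log_sigma, intensity)
    with x, y in [0, 1]; the real method projects anisotropic 3D ellipsoids."""
    ys, xs = torch.meshgrid(
        torch.linspace(0, 1, H), torch.linspace(0, 1, W), indexing="ij")
    img = torch.zeros(H, W)
    for g in gaussians:
        x, y, log_sigma, c = g
        sigma = log_sigma.exp()
        d2 = (xs - x) ** 2 + (ys - y) ** 2
        img = img + c * torch.exp(-d2 / (2 * sigma ** 2))  # accumulate splats
    return img

def guided_sampling(denoiser, target_view, n_gauss=16, T=50, scale=0.1):
    """DDPM ancestral sampling with a view-guidance gradient at every step."""
    betas = torch.linspace(1e-4, 0.02, T)
    alphas = 1.0 - betas
    abar = torch.cumprod(alphas, dim=0)
    x = torch.randn(n_gauss, 4)                       # start from pure noise
    for t in reversed(range(T)):
        x = x.detach().requires_grad_(True)
        eps = denoiser(x, t)                          # predicted noise
        # Predicted clean sample: x0 = (x_t - sqrt(1 - abar_t) * eps) / sqrt(abar_t)
        x0_hat = (x - (1 - abar[t]).sqrt() * eps) / abar[t].sqrt()
        # Guidance: compare the rendered view against the input view.
        loss = ((splat(x0_hat) - target_view) ** 2).mean()
        grad = torch.autograd.grad(loss, x)[0]
        with torch.no_grad():
            # Standard DDPM posterior mean, shifted by the guidance gradient.
            mean = (x - betas[t] / (1 - abar[t]).sqrt() * eps) / alphas[t].sqrt()
            mean = mean - scale * grad
            noise = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
            x = mean + betas[t].sqrt() * noise
    return x
```

A dummy denoiser such as `lambda x, t: torch.zeros_like(x)` with a zero `target_view` is enough to exercise the loop; in the paper, the denoiser would be the unconditional GS diffusion model and the renderer a full differentiable Gaussian Splatting pipeline.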
DOI: 10.48550/arxiv.2407.04237