Single-view 3D Scene Reconstruction with High-fidelity Shape and Texture
Format: Article
Language: English
Abstract: Reconstructing detailed 3D scenes from single-view images remains a challenging task due to limitations in existing approaches, which primarily focus on geometric shape recovery, overlooking object appearances and fine shape details. To address these challenges, we propose a novel framework for simultaneous high-fidelity recovery of object shapes and textures from single-view images. Our approach utilizes the proposed Single-view neural implicit Shape and Radiance field (SSR) representations to leverage both explicit 3D shape supervision and volume rendering of color, depth, and surface normal images. To overcome shape-appearance ambiguity under partial observations, we introduce a two-stage learning curriculum incorporating both 3D and 2D supervision. A distinctive feature of our framework is its ability to generate fine-grained textured meshes while seamlessly integrating rendering capabilities into the single-view 3D reconstruction model. This integration not only improves textured 3D object reconstruction by 27.7% and 11.6% on the 3D-FRONT and Pix3D datasets, respectively, but also supports rendering images from novel viewpoints. Beyond individual objects, our approach facilitates composing object-level representations into flexible scene representations, thereby enabling applications such as holistic scene understanding and 3D scene editing. We conduct extensive experiments to demonstrate the effectiveness of our method.
DOI: 10.48550/arxiv.2311.00457
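The abstract states that the SSR representation is supervised with volume-rendered color, depth, and surface-normal images. The paper's exact rendering formulation is not given here, so the following is a minimal sketch of standard density-based volume compositing that produces those three ray-wise quantities; all names (`volume_render`, `rgb`, `sigma`, `normals`, `t_vals`) are illustrative assumptions, not identifiers from the paper, and the actual method may use an SDF-based weighting instead of raw densities.

```python
import torch

def volume_render(rgb, sigma, normals, t_vals):
    """Composite per-sample color, depth, and surface normal along each ray.

    rgb:     (N_rays, N_samples, 3) predicted colors
    sigma:   (N_rays, N_samples)    predicted densities
    normals: (N_rays, N_samples, 3) predicted or analytic surface normals
    t_vals:  (N_rays, N_samples)    sample distances along each ray
    """
    # Distances between adjacent samples; pad the final interval.
    deltas = t_vals[..., 1:] - t_vals[..., :-1]
    deltas = torch.cat([deltas, 1e10 * torch.ones_like(deltas[..., :1])], dim=-1)

    # Opacity of each interval and accumulated transmittance up to it.
    alpha = 1.0 - torch.exp(-sigma * deltas)
    trans = torch.cumprod(
        torch.cat([torch.ones_like(alpha[..., :1]), 1.0 - alpha + 1e-10], dim=-1),
        dim=-1)[..., :-1]
    weights = alpha * trans  # (N_rays, N_samples)

    # Expected color, depth, and normal under each ray's weight distribution;
    # these 2D maps can then be compared against image-space supervision.
    color = (weights[..., None] * rgb).sum(dim=-2)
    depth = (weights * t_vals).sum(dim=-1)
    normal = (weights[..., None] * normals).sum(dim=-2)
    normal = torch.nn.functional.normalize(normal, dim=-1)
    return color, depth, normal
```

Rendering depth and normal maps alongside color in this way is what allows a single implicit representation to receive 2D supervision on appearance and geometry simultaneously, which is the role the abstract assigns to the 2D stage of the learning curriculum.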