Front2Back: Single View 3D Shape Reconstruction via Front to Back Prediction
Format: Article
Language: English
Online Access: Order full text
Summary: Reconstruction of a 3D shape from a single 2D
vision problem, whose difficulty stems from the inherent ambiguity of
recovering occluded or only partially observed surfaces. Recent methods address
this challenge through the use of largely unstructured neural networks that
effectively distill conditional mapping and priors over 3D shape. In this work,
we induce structure and geometric constraints by leveraging three core
observations: (1) the surface of most everyday objects is often almost entirely
exposed from pairs of typical opposite views; (2) everyday objects often
exhibit global reflective symmetries which can be accurately predicted from
single views; (3) opposite orthographic views of a 3D shape share consistent
silhouettes. Following these observations, we first predict orthographic 2.5D
visible surface maps (depth, normal and silhouette) from perspective 2D images,
and detect global reflective symmetries in this data; second, we predict the
back facing depth and normal maps using as input the front maps and, when
available, the symmetric reflections of these maps; and finally, we reconstruct
a 3D mesh from the union of these maps using a surface reconstruction method
best suited for this data. Our experiments demonstrate that our framework
outperforms state-of-the-art approaches for 3D shape reconstruction from 2D
and 2.5D data in terms of input fidelity and detail preservation.
Specifically, we achieve 12% better performance on average on the ShapeNet
benchmark dataset, and up to 19% for certain classes of objects (e.g., chairs
and vessels).
DOI: 10.48550/arxiv.1912.10589
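
The three-stage pipeline summarized above can be illustrated with a minimal, hypothetical sketch of the data flow. The Python below is not the authors' implementation: `predict_front_maps`, `detect_symmetry`, `reflect_maps`, and `predict_back_maps` are placeholder stubs standing in for the learned components, and the final surface-reconstruction step is indicated only as a comment.

```python
# Hypothetical sketch of the Front2Back data flow described in the summary.
# All function names are illustrative stand-ins, not the paper's code.
import numpy as np

def predict_front_maps(image: np.ndarray):
    """Stand-in for the 2D -> 2.5D network: orthographic depth, normal and
    silhouette maps of the visible (front) surface."""
    h, w = image.shape[:2]
    depth = np.zeros((h, w), dtype=np.float32)
    normals = np.zeros((h, w, 3), dtype=np.float32)
    silhouette = np.zeros((h, w), dtype=bool)
    return depth, normals, silhouette

def detect_symmetry(depth, normals, silhouette):
    """Stand-in for global reflective-symmetry detection on the 2.5D maps.
    Returns a symmetry-plane descriptor, or None if none is found."""
    return None

def reflect_maps(depth, normals, plane):
    """Stub: reflect the front maps across the detected symmetry plane."""
    return depth.copy(), normals.copy()

def predict_back_maps(front_depth, front_normals, silhouette, reflected=None):
    """Stand-in for the front-to-back network. Opposite orthographic views
    share consistent silhouettes, so the back maps reuse the front one."""
    return front_depth.copy(), -front_normals

def front2back(image: np.ndarray):
    # Stage 1: predict front 2.5D maps and detect reflective symmetry.
    depth_f, normals_f, sil = predict_front_maps(image)
    plane = detect_symmetry(depth_f, normals_f, sil)
    reflected = reflect_maps(depth_f, normals_f, plane) if plane is not None else None
    # Stage 2: predict back-facing depth and normal maps.
    depth_b, normals_b = predict_back_maps(depth_f, normals_f, sil, reflected)
    # Stage 3 (not shown): fuse the union of front and back oriented points
    # with a surface reconstruction method suited to this data.
    return (depth_f, normals_f), (depth_b, normals_b), sil

if __name__ == "__main__":
    img = np.zeros((256, 256, 3), dtype=np.float32)
    front, back, sil = front2back(img)
    print(front[0].shape, back[0].shape, sil.shape)
```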