GenWarp: Single Image to Novel Views with Semantic-Preserving Generative Warping
Main authors:
Format: Article
Language: English
Subjects:
Online access: Order full text
DOI: 10.48550/arxiv.2405.17251
Abstract: Generating novel views from a single image remains a challenging task due to the complexity of 3D scenes and the limited diversity of existing multi-view datasets to train a model on. Recent research combining large-scale text-to-image (T2I) models with monocular depth estimation (MDE) has shown promise in handling in-the-wild images. In these methods, an input view is geometrically warped to novel views with estimated depth maps, and the warped image is then inpainted by T2I models. However, they struggle with noisy depth maps and loss of semantic details when warping an input view to novel viewpoints. In this paper, we propose a novel approach for single-shot novel view synthesis: a semantic-preserving generative warping framework that enables T2I generative models to learn where to warp and where to generate, by augmenting cross-view attention with self-attention. Our approach addresses the limitations of existing methods by conditioning the generative model on source view images and incorporating geometric warping signals. Qualitative and quantitative evaluations demonstrate that our model outperforms existing methods in both in-domain and out-of-domain scenarios. The project page is available at https://GenWarp-NVS.github.io/.
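
The warping-and-inpainting pipeline described in the abstract (unprojecting the source view with an estimated depth map, reprojecting it into the novel viewpoint, and letting a T2I model fill the holes) can be illustrated with a minimal forward-warping routine. This is a sketch assuming a pinhole camera with known intrinsics and relative pose; the function name and the nearest-pixel splatting scheme are illustrative choices, not GenWarp's actual implementation.

```python
# Minimal sketch of depth-based geometric warping: unproject source pixels
# with an estimated depth map, transform them by the relative camera pose,
# and reproject them into the novel view. Hypothetical helper, not the
# paper's code.
import numpy as np

def warp_to_novel_view(image, depth, K, R_rel, t_rel):
    """image: (H, W, 3) source view, depth: (H, W) MDE output,
    K: (3, 3) intrinsics, R_rel/t_rel: source-to-target rotation/translation."""
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T  # (3, H*W)

    # Unproject to 3D in the source camera frame, then move to the target frame.
    pts_src = np.linalg.inv(K) @ pix * depth.reshape(1, -1)
    pts_tgt = R_rel @ pts_src + t_rel[:, None]

    # Reproject into the target image plane.
    proj = K @ pts_tgt
    uv_tgt = proj[:2] / np.clip(proj[2:], 1e-6, None)

    # Nearest-pixel splatting; unfilled regions stay black and would be
    # inpainted by the T2I model in the pipeline described above.
    warped = np.zeros_like(image)
    x, y = np.round(uv_tgt[0]).astype(int), np.round(uv_tgt[1]).astype(int)
    valid = (x >= 0) & (x < W) & (y >= 0) & (y < H) & (pts_tgt[2] > 0)
    warped[y[valid], x[valid]] = image.reshape(-1, 3)[valid]
    return warped
```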
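The abstract's central mechanism, augmenting cross-view attention with self-attention so the T2I model can learn where to warp and where to generate, can be read as an attention layer whose keys and values combine target-view tokens (self-attention) with source-view tokens (cross-view attention). The PyTorch sketch below follows that reading; the class name, feature dimension, and projection layer are assumptions for illustration, not the paper's architecture.

```python
# Minimal sketch of cross-view attention augmented with self-attention:
# target-view queries attend jointly to their own tokens and to source-view
# tokens. Illustrative module, not the GenWarp implementation.
import torch
import torch.nn as nn

class AugmentedCrossViewAttention(nn.Module):
    def __init__(self, dim, num_heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.to_kv_src = nn.Linear(dim, dim)  # project source-view features

    def forward(self, tgt_tokens, src_tokens):
        # tgt_tokens: (B, N_tgt, dim) tokens of the view being generated
        # src_tokens: (B, N_src, dim) tokens carrying the source view /
        #             geometric warping signal
        kv = torch.cat([tgt_tokens, self.to_kv_src(src_tokens)], dim=1)
        out, _ = self.attn(query=tgt_tokens, key=kv, value=kv)
        return out

# Usage with hypothetical 64x64 latent grids flattened to token sequences.
attn = AugmentedCrossViewAttention(dim=320)
tgt = torch.randn(1, 64 * 64, 320)
src = torch.randn(1, 64 * 64, 320)
fused = attn(tgt, src)  # (1, 4096, 320)
```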