Free View Synthesis
Format: Article
Language: English
Abstract: We present a method for novel view synthesis from input images that are freely distributed around a scene. Our method does not rely on a regular arrangement of input views, can synthesize images for free camera movement through the scene, and works for general scenes with unconstrained geometric layouts. We calibrate the input images via SfM and erect a coarse geometric scaffold via MVS. This scaffold is used to create a proxy depth map for a novel view of the scene. Based on this depth map, a recurrent encoder-decoder network processes reprojected features from nearby views and synthesizes the new view. Our network does not need to be optimized for a given scene. After training on a dataset, it works in previously unseen environments with no fine-tuning or per-scene optimization. We evaluate the presented approach on challenging real-world datasets, including Tanks and Temples, where we demonstrate successful view synthesis for the first time and substantially outperform prior and concurrent work.
DOI: 10.48550/arxiv.2008.05511
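
The abstract outlines a concrete pipeline: SfM calibration, an MVS scaffold, a proxy depth map for the target view, reprojection of nearby source views into that target, and recurrent fusion into the final image. The sketch below illustrates the geometric steps (unprojection via the proxy depth, reprojection into source views) and a toy recurrent fusion. All module names, channel sizes, and the GRU-style update are illustrative assumptions, not the paper's architecture; for brevity it also warps source pixels and encodes them after warping, whereas the paper reprojects encoded features.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def unproject(depth, K_inv, cam_to_world):
    """Lift target-view pixels to world points using the proxy depth map.
    depth: (H, W); K_inv: (3, 3); cam_to_world: (4, 4)."""
    H, W = depth.shape
    ys, xs = torch.meshgrid(torch.arange(H, dtype=torch.float32),
                            torch.arange(W, dtype=torch.float32), indexing="ij")
    pix = torch.stack([xs, ys, torch.ones_like(xs)], dim=-1)   # (H, W, 3)
    pts_cam = (pix @ K_inv.T) * depth.unsqueeze(-1)            # rays scaled by depth
    pts_h = torch.cat([pts_cam, torch.ones(H, W, 1)], dim=-1)  # homogeneous coords
    return (pts_h @ cam_to_world.T)[..., :3]                   # (H, W, 3) world points

def reproject(feat, world_pts, K, world_to_cam):
    """Warp a source view into the target by sampling it where the
    target's world points project. feat: (1, C, H, W)."""
    H, W, _ = world_pts.shape
    pts_h = torch.cat([world_pts, torch.ones(H, W, 1)], dim=-1)
    pts_cam = (pts_h @ world_to_cam.T)[..., :3]
    uvw = pts_cam @ K.T
    uv = uvw[..., :2] / uvw[..., 2:3].clamp(min=1e-6)          # pixel coordinates
    grid = torch.stack([2 * uv[..., 0] / (W - 1) - 1,          # normalize to [-1, 1]
                        2 * uv[..., 1] / (H - 1) - 1], dim=-1)
    return F.grid_sample(feat, grid.unsqueeze(0), align_corners=True)

class RecurrentFusion(nn.Module):
    """Toy stand-in for the recurrent encoder-decoder: encode each warped
    source view, fold it into a running state, decode the state to RGB."""
    def __init__(self, ch=32):
        super().__init__()
        self.ch = ch
        self.enc = nn.Conv2d(3, ch, 3, padding=1)
        self.update = nn.Conv2d(2 * ch, ch, 3, padding=1)
        self.dec = nn.Conv2d(ch, 3, 3, padding=1)

    def forward(self, warped_views):                           # list of (1, 3, H, W)
        B, _, H, W = warped_views[0].shape
        state = warped_views[0].new_zeros(B, self.ch, H, W)
        for v in warped_views:
            f = torch.relu(self.enc(v))
            state = torch.tanh(self.update(torch.cat([state, f], dim=1)))
        return torch.sigmoid(self.dec(state))                  # synthesized target view

# Usage with placeholder calibration: identity poses, constant proxy depth.
H, W = 64, 64
depth = torch.full((H, W), 2.0)                                # proxy depth from the scaffold
K = torch.tensor([[50.0, 0.0, W / 2], [0.0, 50.0, H / 2], [0.0, 0.0, 1.0]])
world_pts = unproject(depth, torch.inverse(K), torch.eye(4))
warped = [reproject(torch.rand(1, 3, H, W), world_pts, K, torch.eye(4))
          for _ in range(3)]                                   # three nearby source views
novel_view = RecurrentFusion()(warped)                         # (1, 3, H, W)
```

Untrained and with placeholder poses, this only exercises the data flow; the point is the structure: the proxy depth drives the warps, and the recurrent unit aggregates an arbitrary, unordered set of nearby views without any per-scene optimization.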