SimVS: Simulating World Inconsistencies for Robust View Synthesis
Main Authors: |  |
---|---|
Format: | Article |
Language: | English |
Online Access: | Order full text |
Summary: | Novel-view synthesis techniques achieve impressive results for static scenes but struggle when faced with the inconsistencies inherent to casual capture settings: varying illumination, scene motion, and other unintended effects that are difficult to model explicitly. We present an approach for leveraging generative video models to simulate the inconsistencies in the world that can occur during capture. We use this process, along with existing multi-view datasets, to create synthetic data for training a multi-view harmonization network that is able to reconcile inconsistent observations into a consistent 3D scene. We demonstrate that our world-simulation strategy significantly outperforms traditional augmentation methods in handling real-world scene variations, thereby enabling highly accurate static 3D reconstructions in the presence of a variety of challenging inconsistencies. Project page: https://alextrevithick.github.io/simvs |
DOI: | 10.48550/arxiv.2412.07696 |