ViewFusion: Learning Composable Diffusion Models for Novel View Synthesis
Saved in:

| Field | Value |
|---|---|
| Main authors | , , , |
| Format | Article |
| Language | English |
| Subjects | |
| Online access | Order full text |
Summary: Deep learning is providing a wealth of new approaches to the old problem of novel view synthesis, from Neural Radiance Field (NeRF) based approaches to end-to-end architectures. Each approach offers specific strengths but also comes with specific limitations in its applicability. This work introduces ViewFusion, a state-of-the-art end-to-end generative approach to novel view synthesis with unparalleled flexibility. ViewFusion works by simultaneously applying a diffusion denoising step to any number of input views of a scene, then combining the noise gradients obtained for each view with an (inferred) pixel-weighting mask, ensuring that for each region of the target scene only the most informative input views are taken into account. Our approach resolves several limitations of previous approaches: (1) it is trainable and generalizes across multiple scenes and object classes, (2) it adaptively takes in a variable number of pose-free views at both train and test time, and (3) thanks to its generative nature, it generates plausible views even in severely underdetermined conditions, all while producing views of quality on par with, or even better than, state-of-the-art methods. Limitations include the lack of a 3D embedding of the scene, which results in relatively slow inference, and the fact that our method has only been tested on the relatively small NMR dataset. Code is available.
DOI: 10.48550/arxiv.2402.02906
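
The composition step the summary describes (combining per-view noise predictions with an inferred pixel-weighting mask) can be sketched roughly as below. This is a minimal sketch, not the authors' released code: `denoiser` and `weight_net` are hypothetical stand-ins for the paper's trained networks, and the softmax normalization across views is an assumption about how the weighting mask is inferred.

```python
import torch

def composed_denoising_step(x_t, t, input_views, denoiser, weight_net):
    """One reverse-diffusion step that composes noise predictions from
    several input views, following the mechanism in the summary.
    `denoiser` and `weight_net` are hypothetical stand-ins for the
    paper's networks; shapes are illustrative."""
    eps_per_view = []
    logits_per_view = []
    for view in input_views:
        # Predict noise for the target image conditioned on one input view.
        eps = denoiser(x_t, t, cond=view)        # (B, C, H, W)
        # Predict an unnormalized per-pixel informativeness score.
        logits = weight_net(x_t, t, cond=view)   # (B, 1, H, W)
        eps_per_view.append(eps)
        logits_per_view.append(logits)

    eps_stack = torch.stack(eps_per_view, dim=0)      # (N, B, C, H, W)
    logit_stack = torch.stack(logits_per_view, dim=0) # (N, B, 1, H, W)

    # Softmax over the view axis yields a pixel-weighting mask that sums
    # to 1 across views, so each pixel of the target draws mostly on the
    # input views that are most informative for that region.
    weights = torch.softmax(logit_stack, dim=0)       # (N, B, 1, H, W)
    eps_combined = (weights * eps_stack).sum(dim=0)   # (B, C, H, W)
    return eps_combined
```

Because the combination is a per-pixel weighted sum over however many views are supplied, the same step accepts one view or many, at train or test time, without architectural changes, which is what makes the approach composable over a variable number of pose-free inputs.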