SymmNeRF: Learning to Explore Symmetry Prior for Single-View View Synthesis
Saved in:
Main Authors: | , , , , , |
---|---|
Format: | Article |
Language: | English |
Subjects: | |
Online Access: | Order full text |
Abstract: | We study the problem of novel view synthesis of objects from a single image.
Existing methods have demonstrated promising results in single-view view synthesis.
However, they still fail to recover the fine appearance details, especially in
self-occluded areas. This is because a single view only provides limited
information. We observe that man-made objects usually exhibit symmetric
appearances, which introduce additional prior knowledge. Motivated by this, we
investigate the potential performance gains of explicitly embedding symmetry
into the scene representation. In this paper, we propose SymmNeRF, a neural
radiance field (NeRF) based framework that combines local and global
conditioning through the introduction of symmetry priors. In particular, SymmNeRF
takes the pixel-aligned image features and the corresponding symmetric features
as extra inputs to the NeRF, whose parameters are generated by a hypernetwork.
As the parameters are conditioned on the image-encoded latent codes, SymmNeRF
is thus scene-independent and can generalize to new scenes. Experiments on
synthetic and real-world datasets show that SymmNeRF synthesizes novel views
with more details regardless of the pose transformation, and demonstrates good
generalization when applied to unseen objects. Code is available at:
https://github.com/xingyi-li/SymmNeRF. |
---|---|
DOI: | 10.48550/arxiv.2209.14819 |
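The abstract describes conditioning the NeRF on pixel-aligned image features together with the features of each query point's symmetric counterpart. A minimal NumPy sketch of that idea follows, assuming the object is canonically aligned so its symmetry plane is x = 0 and using nearest-neighbor feature lookup; all function names, the camera intrinsics, and the feature map are hypothetical illustrations, not taken from the paper or its code:

```python
import numpy as np

def reflect_across_plane(p, normal=np.array([1.0, 0.0, 0.0])):
    """Reflect a 3D point across the plane through the origin with the
    given normal. With the symmetry plane at x = 0, this yields the
    mirror point of p."""
    n = normal / np.linalg.norm(normal)
    return p - 2.0 * np.dot(p, n) * n

def pixel_aligned_feature(feat_map, K, p):
    """Project a 3D point (camera coordinates) with intrinsics K and
    fetch the nearest feature vector from an (H, W, C) encoder map."""
    uvw = K @ p
    u, v = uvw[0] / uvw[2], uvw[1] / uvw[2]
    h, w = feat_map.shape[:2]
    iu = int(np.clip(round(u), 0, w - 1))
    iv = int(np.clip(round(v), 0, h - 1))
    return feat_map[iv, iu]

# Hypothetical intrinsics and encoder output for a 64x64 feature map.
K = np.array([[100.0, 0.0, 32.0],
              [0.0, 100.0, 32.0],
              [0.0,   0.0,  1.0]])
feat = np.random.rand(64, 64, 16)

# For a query point x, condition on the feature at its projection and
# at the projection of its mirror point; the concatenation would be fed
# to the NeRF alongside position and view direction.
x = np.array([0.3, 0.1, 2.0])
x_sym = reflect_across_plane(x)
extra_input = np.concatenate([
    pixel_aligned_feature(feat, K, x),
    pixel_aligned_feature(feat, K, x_sym),
])
```

In the paper's actual pipeline the NeRF's weights are additionally generated by a hypernetwork from an image-encoded latent code, which is what makes the model scene-independent; the sketch above only illustrates the symmetric feature-sampling step.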