Control-NeRF: Editable Feature Volumes for Scene Rendering and Manipulation
Main Authors:
Format: Article
Language: English
Subjects:
Online Access: Order full text
Summary: We present a novel method for performing flexible, 3D-aware image content manipulation while enabling high-quality novel view synthesis. While NeRF-based approaches are effective for novel view synthesis, such models memorize the radiance for every point in a scene within a neural network. Since these models are scene-specific and lack a 3D scene representation, classical editing, such as shape manipulation or combining scenes, is not possible. Hence, editing and combining NeRF-based scenes has not been demonstrated. With the aim of obtaining interpretable and controllable scene representations, our model couples learnt scene-specific feature volumes with a scene-agnostic neural rendering network. With this hybrid representation, we decouple neural rendering from scene-specific geometry and appearance. We can generalize to novel scenes by optimizing only the scene-specific 3D feature representation, while keeping the parameters of the rendering network fixed. The rendering function learnt during the initial training stage can thus be easily applied to new scenes, making our approach more flexible. More importantly, since the feature volumes are independent of the rendering model, we can manipulate and combine scenes by editing their corresponding feature volumes. The edited volume can then be plugged into the rendering model to synthesize high-quality novel views. We demonstrate various scene manipulations, including mixing scenes, deforming objects, and inserting objects into scenes, while still producing photo-realistic results.
DOI: 10.48550/arxiv.2204.10850
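
As a rough illustration of the hybrid representation described in the summary, the sketch below pairs a scene-specific, learnable feature volume with a shared, scene-agnostic rendering MLP, and fits a new scene by optimizing only the volume while the renderer stays frozen. This is a minimal sketch under assumed design choices, not the authors' implementation: the names (`FeatureVolume`, `RenderingMLP`), grid resolution, channel count, and the dummy training step are all illustrative assumptions, not identifiers or hyperparameters from the Control-NeRF paper or its code release.

```python
# Minimal PyTorch sketch (not the authors' code) of a scene-specific feature
# volume decoded by a scene-agnostic rendering network.
import torch
import torch.nn as nn
import torch.nn.functional as F


class FeatureVolume(nn.Module):
    """A learnable 3D grid of latent features; one instance per scene (assumed layout)."""

    def __init__(self, channels=16, resolution=64):
        super().__init__()
        # (1, C, D, H, W) grid of features, optimized per scene.
        self.grid = nn.Parameter(
            0.01 * torch.randn(1, channels, resolution, resolution, resolution)
        )

    def forward(self, xyz):
        # xyz: (N, 3) query points, assumed to lie in [-1, 1]^3.
        pts = xyz.view(1, -1, 1, 1, 3)
        # Trilinear interpolation of the grid at the query points.
        feats = F.grid_sample(self.grid, pts, align_corners=True)  # (1, C, N, 1, 1)
        return feats.reshape(self.grid.shape[1], -1).t()           # (N, C)


class RenderingMLP(nn.Module):
    """A scene-agnostic decoder: (latent feature, view direction) -> (rgb, density)."""

    def __init__(self, channels=16, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(channels + 3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),  # 3 rgb channels + 1 density
        )

    def forward(self, feats, view_dirs):
        out = self.net(torch.cat([feats, view_dirs], dim=-1))
        rgb = torch.sigmoid(out[..., :3])
        sigma = F.softplus(out[..., 3:])
        return rgb, sigma


# Fit a new scene by optimizing only its feature volume while the shared
# rendering network stays frozen, as the summary describes.
renderer = RenderingMLP()
for p in renderer.parameters():
    p.requires_grad_(False)

scene_volume = FeatureVolume()
optimizer = torch.optim.Adam(scene_volume.parameters(), lr=1e-2)

xyz = torch.rand(1024, 3) * 2 - 1                 # dummy 3D sample points
dirs = F.normalize(torch.randn(1024, 3), dim=-1)  # dummy view directions
target_rgb = torch.rand(1024, 3)                  # dummy color supervision

optimizer.zero_grad()
rgb, sigma = renderer(scene_volume(xyz), dirs)
loss = F.mse_loss(rgb, target_rgb)
loss.backward()
optimizer.step()
```

In a setup like this, the scene edits mentioned in the summary would amount to modifying the volume tensor itself, for example copying a sub-region from one scene's grid into another's or warping the grid, and then rendering the edited volume with the same frozen network.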