G3DST: Generalizing 3D Style Transfer with Neural Radiance Fields across Scenes and Styles
Format: Article
Language: eng
Abstract: Neural Radiance Fields (NeRF) have emerged as a powerful tool for creating highly detailed and photorealistic scenes. Existing methods for NeRF-based 3D style transfer require extensive per-scene optimization for single or multiple styles, limiting the applicability and efficiency of 3D style transfer. In this work, we overcome the limitations of existing methods by rendering stylized novel views from a NeRF without per-scene or per-style optimization. To this end, we leverage a generalizable NeRF model to facilitate style transfer in 3D, enabling a single learned model to be used across various scenes. By incorporating a hypernetwork into the generalizable NeRF, our approach enables on-the-fly generation of stylized novel views. Moreover, we introduce a novel flow-based multi-view consistency loss to preserve consistency across multiple views. We evaluate our method across various scenes and artistic styles and demonstrate that it generates high-quality, multi-view-consistent stylized images without a scene-specific implicit model. Our findings show that this approach achieves visual quality comparable to that of per-scene methods while significantly improving efficiency and applicability, marking a notable advance in 3D style transfer.
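The abstract describes conditioning a generalizable NeRF on a style via a hypernetwork, but includes no code. The following is a minimal PyTorch sketch of that general idea only: a hypernetwork maps a style embedding to per-layer modulations of a NeRF-like MLP, so one trained model can stylize on the fly. All names, sizes, and the FiLM-style modulation scheme (`StyleHyperNet`, `style_dim`, scale/shift per layer) are illustrative assumptions, not the paper's implementation.

```python
# Hedged sketch of hypernetwork-conditioned stylization; not the authors' code.
import torch
import torch.nn as nn

class StyleHyperNet(nn.Module):
    """Maps a style embedding to per-layer scale/shift modulations (assumed design)."""
    def __init__(self, style_dim: int, hidden: int, n_layers: int):
        super().__init__()
        # Two modulation vectors (scale, shift) per modulated layer.
        self.net = nn.Sequential(
            nn.Linear(style_dim, 256), nn.ReLU(),
            nn.Linear(256, n_layers * 2 * hidden),
        )
        self.n_layers, self.hidden = n_layers, hidden

    def forward(self, style_emb: torch.Tensor):
        params = self.net(style_emb).view(self.n_layers, 2, self.hidden)
        return params[:, 0], params[:, 1]  # scales, shifts

class StylizedRadianceMLP(nn.Module):
    """Tiny NeRF-like MLP whose hidden activations are modulated by style."""
    def __init__(self, in_dim: int = 3, hidden: int = 128, n_layers: int = 4):
        super().__init__()
        dims = [in_dim] + [hidden] * n_layers
        self.layers = nn.ModuleList(
            [nn.Linear(a, b) for a, b in zip(dims, dims[1:])])
        self.out = nn.Linear(hidden, 4)  # RGB + density

    def forward(self, x, scales, shifts):
        for layer, s, b in zip(self.layers, scales, shifts):
            x = torch.relu(layer(x) * (1 + s) + b)  # FiLM-style modulation
        return self.out(x)

# Usage: stylized outputs for a batch of 3D points, given a style embedding.
hyper = StyleHyperNet(style_dim=64, hidden=128, n_layers=4)
nerf = StylizedRadianceMLP()
scales, shifts = hyper(torch.randn(64))
rgb_sigma = nerf(torch.rand(1024, 3), scales, shifts)  # (1024, 4)
```

Because the hypernetwork, not the radiance field, carries the style, switching styles only requires a new embedding rather than re-optimizing the scene model, which is the efficiency gain the abstract claims.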
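Similarly, the flow-based multi-view consistency loss is only named, not specified. A plausible minimal sketch, assuming precomputed optical flow between two views and a validity mask (e.g., from a forward-backward check): warp stylized view A into view B and penalize differences where the correspondence is trusted. The function name and input conventions here are assumptions.

```python
# Hedged sketch of a flow-based multi-view consistency term; not the paper's exact loss.
import torch
import torch.nn.functional as F

def flow_consistency_loss(img_a, img_b, flow_ab, valid_mask):
    """img_a, img_b: (1, 3, H, W) stylized renders of two views.
    flow_ab: (1, 2, H, W) pixel-space flow mapping view B pixels to view A.
    valid_mask: (1, 1, H, W), 1 where the flow is reliable."""
    _, _, h, w = img_b.shape
    # Base pixel grid in normalized [-1, 1] coordinates for grid_sample.
    ys, xs = torch.meshgrid(
        torch.linspace(-1, 1, h), torch.linspace(-1, 1, w), indexing="ij")
    grid = torch.stack((xs, ys), dim=-1).unsqueeze(0)       # (1, H, W, 2)
    # Convert pixel-space flow to normalized offsets and displace the grid.
    offset = torch.stack(
        (flow_ab[:, 0] * 2 / max(w - 1, 1),
         flow_ab[:, 1] * 2 / max(h - 1, 1)), dim=-1)        # (1, H, W, 2)
    warped_a = F.grid_sample(img_a, grid + offset, align_corners=True)
    # Penalize stylization differences only at trusted correspondences.
    return (valid_mask * (warped_a - img_b).abs()).mean()
```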
DOI: 10.48550/arxiv.2408.13508