One is All: Bridging the Gap Between Neural Radiance Fields Architectures with Progressive Volume Distillation
Format: Article
Language: English
Abstract: Neural Radiance Fields (NeRF) methods have proved effective as compact,
high-quality and versatile representations for 3D scenes, and enable downstream
tasks such as editing, retrieval, navigation, etc. Various neural architectures
are vying for the core structure of NeRF, including the plain Multi-Layer
Perceptron (MLP), sparse tensors, low-rank tensors, hashtables and their
compositions. Each of these representations has its particular set of
trade-offs. For example, the hashtable-based representations admit faster
training and rendering but their lack of clear geometric meaning hampers
downstream tasks like spatial-relation-aware editing. In this paper, we propose
Progressive Volume Distillation (PVD), a systematic distillation method that
allows any-to-any conversions between different architectures, including MLP,
sparse or low-rank tensors, hashtables and their compositions. PVD consequently
empowers downstream applications to optimally adapt the neural representations
for the task at hand in a post hoc fashion. The conversions are fast, as
distillation is progressively performed on different levels of volume
representations, from shallower to deeper. We also employ special treatment of
density to deal with its specific numerical instability problem. Empirical
evidence is presented to validate our method on the NeRF-Synthetic, LLFF and
TanksAndTemples datasets. For example, with PVD, an MLP-based NeRF model can be
distilled from a hashtable-based Instant-NGP model 10x-20x faster
than training the original NeRF from scratch, while achieving a superior
level of synthesis quality. Code is available at
https://github.com/megvii-research/AAAI2023-PVD.
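The abstract describes a teacher-student setup: a trained source model supervises a target model of a different architecture at sampled volume points, with density handled specially to avoid numerical instability. Below is a minimal PyTorch sketch of that idea. The names (TinyNeRF, distill_step), the log1p density transform, and the random point sampling are illustrative assumptions, not the released code's API, and the progressive shallow-to-deep distillation schedule is omitted.

import torch
import torch.nn.functional as F
from torch import nn

class TinyNeRF(nn.Module):
    """Illustrative stand-in for any NeRF backbone: maps a 3D point
    to a scalar density (sigma) and an RGB color."""
    def __init__(self, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),  # 1 density channel + 3 color channels
        )

    def forward(self, x: torch.Tensor):
        out = self.net(x)
        sigma = F.softplus(out[..., :1])   # non-negative density
        rgb = torch.sigmoid(out[..., 1:])  # colors in [0, 1]
        return sigma, rgb

def distill_step(teacher, student, opt, batch: int = 4096, beta: float = 1.0):
    """One distillation step: the student matches the frozen teacher's
    outputs at randomly sampled points. Density is compared after a
    log1p transform to tame its large dynamic range (a stand-in for
    the paper's special treatment of density)."""
    pts = torch.rand(batch, 3) * 2.0 - 1.0  # sample points in [-1, 1]^3
    with torch.no_grad():
        t_sigma, t_rgb = teacher(pts)
    s_sigma, s_rgb = student(pts)
    loss = F.mse_loss(torch.log1p(s_sigma), torch.log1p(t_sigma))
    loss = loss + beta * F.mse_loss(s_rgb, t_rgb)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Usage: distill a smaller student from a larger teacher. In PVD the two
# would be different architectures (e.g., hashtable teacher, MLP student).
teacher, student = TinyNeRF(hidden=128), TinyNeRF(hidden=64)
opt = torch.optim.Adam(student.parameters(), lr=1e-3)
for _ in range(100):
    distill_step(teacher, student, opt)

Per the abstract, the actual method distills intermediate volume representations first and deeper levels later, which is what makes the conversion fast.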
DOI: 10.48550/arxiv.2211.15977