Efficient NeRF Optimization -- Not All Samples Remain Equally Hard
Main Authors: , , ,
Format: Article
Language: English
Subjects:
Online Access: Order full text
Abstract: We propose an application of online hard sample mining for efficient training of Neural Radiance Fields (NeRF). NeRF models produce state-of-the-art quality for many 3D reconstruction and rendering tasks but require substantial computational resources. The encoding of the scene information within the NeRF network parameters necessitates stochastic sampling. We observe that during training, a major part of the compute time and memory usage is spent on processing already learnt samples, which no longer affect the model update significantly. We identify the backward pass on the stochastic samples as the computational bottleneck during optimization. We therefore perform the first forward pass in inference mode as a relatively low-cost search for hard samples, and then build the computational graph and update the NeRF network parameters using only those hard samples. To demonstrate the effectiveness of the proposed approach, we apply our method to Instant-NGP, resulting in significant improvements of view-synthesis quality over the baseline (a 1 dB average improvement at equal training time, or a 2x speedup to reach the same PSNR level), along with approximately 40% memory savings from building the computational graph with only the hard samples. As our method interfaces only with the network module, we expect it to be widely applicable.
DOI: 10.48550/arxiv.2408.03193
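
The abstract pins down the algorithmic core: one gradient-free forward pass to score samples, followed by a graph-building pass restricted to the hard ones. Below is a minimal PyTorch sketch of such a training step, written under stated assumptions; `model`, the per-ray MSE scoring, and the fixed `keep_ratio` top-k selection rule are illustrative placeholders, not the paper's exact criterion or its Instant-NGP integration.

```python
import torch

def training_step(model, optimizer, rays, targets, keep_ratio=0.25):
    """One hard-sample-mining step: score all samples cheaply, train on the hardest."""
    k = max(1, int(keep_ratio * rays.shape[0]))

    # 1) Cheap forward pass in inference mode: no computational graph is
    #    built, so this costs only forward compute and activation memory.
    with torch.inference_mode():
        per_sample_loss = ((model(rays) - targets) ** 2).mean(dim=-1)
        hard_idx = torch.topk(per_sample_loss, k).indices

    # Inference tensors cannot participate in autograd; clone the indices
    # into a regular tensor before using them to slice the training batch.
    hard_idx = hard_idx.clone()

    # 2) Full forward + backward only on the hard subset: the graph (and
    #    its memory) now covers k samples instead of the whole batch.
    optimizer.zero_grad(set_to_none=True)
    loss = torch.nn.functional.mse_loss(model(rays[hard_idx]), targets[hard_idx])
    loss.backward()
    optimizer.step()
    return loss.item()
```

The saving comes from the second phase: the backward pass, which the abstract identifies as the bottleneck, runs on only `k` samples rather than the full batch, while the extra inference-mode forward pass adds comparatively little cost.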