GL-NeRF: Gauss-Laguerre Quadrature Enables Training-Free NeRF Acceleration
Saved in:
Main Authors: , , ,
Format: Article
Language: English
Subjects:
Online Access: Order full text
Summary: Volume rendering in neural radiance fields is inherently time-consuming due
to the large number of MLP calls on the points sampled per ray. Previous works
address this issue by introducing new neural networks or data structures.
In this work, we propose GL-NeRF, a new perspective on computing volume
rendering with Gauss-Laguerre quadrature. GL-NeRF significantly reduces the
number of MLP calls needed for volume rendering while introducing no additional data
structures or neural networks. Its simple formulation makes adopting GL-NeRF in
any NeRF model possible. In the paper, we first justify the use of
Gauss-Laguerre quadrature and then demonstrate this plug-and-play attribute by
implementing it in two different NeRF models. We show that with a minimal drop
in performance, GL-NeRF can significantly reduce the number of MLP calls,
showing the potential to speed up any NeRF model.
DOI: 10.48550/arxiv.2410.19831
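The abstract's core idea is that the volume rendering integral can be evaluated with Gauss-Laguerre quadrature, which approximates integrals of the form \int_0^\infty e^{-x} f(x) dx using only a few weighted samples. The sketch below illustrates that quadrature step only; it is not the paper's implementation. The change of variables to accumulated optical depth and the `color_at_optical_depth` stand-in for the NeRF MLP query are assumptions made for illustration.

```python
import numpy as np
from numpy.polynomial.laguerre import laggauss

# Gauss-Laguerre quadrature approximates integrals of the form
#   I = \int_0^inf e^{-x} f(x) dx  ~=  sum_i w_i * f(x_i).
# The volume rendering integral, C = \int T(t) sigma(t) c(t) dt with
# T(t) = exp(-\int_0^t sigma(s) ds), takes this form once rewritten in
# terms of accumulated optical depth (an assumption of this sketch),
# so each quadrature node costs exactly one MLP call.

def color_at_optical_depth(x):
    """Hypothetical stand-in for the NeRF MLP: in a real model, each
    optical-depth value x would be mapped back to a 3D point on the ray
    and the MLP queried for the color there."""
    return np.stack([np.exp(-0.5 * x), np.sin(x) ** 2, 1.0 / (1.0 + x)], axis=-1)

def gl_render(n_points=8):
    """Approximate the pixel color with n_points MLP calls instead of the
    hundreds of stratified samples used by vanilla NeRF."""
    x, w = laggauss(n_points)           # quadrature nodes and weights
    colors = color_at_optical_depth(x)  # shape (n_points, 3)
    return w @ colors                   # weighted sum ~ rendered RGB

if __name__ == "__main__":
    print(gl_render(8))
```

An n-point Gauss-Laguerre rule is exact for polynomials of degree up to 2n-1 in the transformed variable, which is why a handful of weighted samples can stand in for dense stratified sampling along the ray.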