On Scaling Up 3D Gaussian Splatting Training
Main authors:
Format: Article
Language: English
Abstract: 3D Gaussian Splatting (3DGS) is increasingly popular for 3D reconstruction
due to its superior visual quality and rendering speed. However, 3DGS training
currently occurs on a single GPU, limiting its ability to handle
high-resolution and large-scale 3D reconstruction tasks due to memory
constraints. We introduce Grendel, a distributed system designed to partition
3DGS parameters and parallelize computation across multiple GPUs. As each
Gaussian affects a small, dynamic subset of rendered pixels, Grendel employs
sparse all-to-all communication to transfer the necessary Gaussians to pixel
partitions and performs dynamic load balancing. Unlike existing 3DGS systems
that train using one camera view image at a time, Grendel supports batched
training with multiple views. We explore various optimization hyperparameter
scaling strategies and find that a simple sqrt(batch size) scaling rule is
highly effective. Evaluations using large-scale, high-resolution scenes show
that Grendel enhances rendering quality by scaling up 3DGS parameters across
multiple GPUs. On the Rubble dataset, we achieve a test PSNR of 27.28 by
distributing 40.4 million Gaussians across 16 GPUs, compared to a PSNR of 26.28
using 11.2 million Gaussians on a single GPU. Grendel is an open-source project
available at: https://github.com/nyu-systems/Grendel-GS
DOI: 10.48550/arxiv.2406.18533
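
The sqrt(batch size) hyperparameter scaling rule mentioned in the abstract can be illustrated with a minimal sketch: when optimizing over a batch of camera views instead of a single view, each learning rate is multiplied by the square root of the batch size. This is an assumption-labeled illustration, not Grendel's actual code; the helper `scale_lr` and the base learning rates below are hypothetical placeholders.

```python
import math

def scale_lr(base_lr: float, batch_size: int) -> float:
    """Scale a single-view learning rate for batched multi-view training
    using the sqrt(batch size) rule described in the abstract."""
    return base_lr * math.sqrt(batch_size)

# Hypothetical per-attribute base learning rates for one-view-at-a-time training.
base_lrs = {"position": 1.6e-4, "opacity": 0.05, "scaling": 5e-3}

batch_size = 16
scaled_lrs = {name: scale_lr(lr, batch_size) for name, lr in base_lrs.items()}
print(scaled_lrs)  # each rate is 4x its base value, since sqrt(16) = 4
```

Under this rule, growing the batch size leaves the per-view step size sub-linear in the batch size, which is the behavior the abstract reports as highly effective in practice.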