GPS-Gaussian+: Generalizable Pixel-wise 3D Gaussian Splatting for Real-Time Human-Scene Rendering from Sparse Views
Saved in:

Main author(s): , , , , , , ,
Format: Article
Language: English
Subject headings:
Online access: Order full text
Abstract:

Differentiable rendering techniques have recently shown promising results for free-viewpoint video synthesis of characters. However, such methods, whether based on Gaussian Splatting or neural implicit rendering, typically require per-subject optimization, which does not meet the real-time rendering requirement of interactive applications. We propose a generalizable Gaussian Splatting approach for high-resolution image rendering under a sparse-view camera setting. To this end, we introduce Gaussian parameter maps defined on the source views and directly regress Gaussian properties for instant novel view synthesis without any fine-tuning or optimization. We train our Gaussian parameter regression module on human-only or human-scene data, jointly with a depth estimation module that lifts the 2D parameter maps to 3D space. The proposed framework is fully differentiable with both depth and rendering supervision, or with rendering supervision alone. We further introduce a regularization term and an epipolar attention mechanism to preserve geometric consistency between the two source views, especially when depth supervision is omitted. Experiments on several datasets demonstrate that our method outperforms state-of-the-art methods while rendering at a substantially higher speed.
DOI: 10.48550/arxiv.2411.11363
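The abstract's core idea, regressing per-pixel Gaussian properties on the source views and lifting them to 3D with an estimated depth map, can be sketched minimally. The snippet below is an illustration under a standard pinhole-camera assumption, not the paper's implementation; the function name `lift_to_3d` and all variable names are invented for this sketch. It unprojects each pixel of a depth map through the camera intrinsics, yielding one candidate 3D Gaussian center per pixel.

```python
# Minimal sketch (assumed pinhole model; illustrative names, not the
# paper's code): lift a pixel-wise parameter map to 3D via a depth map.
import numpy as np

def lift_to_3d(depth, K):
    """Unproject every pixel (u, v) with depth d to the 3D point
    d * K^{-1} [u, v, 1]^T in the camera frame."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))          # pixel grids
    pix = np.stack([u, v, np.ones_like(u)], axis=-1)        # (h, w, 3)
    pix = pix.reshape(-1, 3).T                               # 3 x (h*w)
    rays = np.linalg.inv(K) @ pix                            # back-projected rays
    points = (rays * depth.reshape(1, -1)).T                 # (h*w) x 3
    return points

# Toy example: a 2x2 depth map at constant depth 2, unit focal length.
K = np.array([[1.0, 0.0, 0.5],
              [0.0, 1.0, 0.5],
              [0.0, 0.0, 1.0]])
depth = np.full((2, 2), 2.0)
pts = lift_to_3d(depth, K)  # each pixel becomes one 3D Gaussian center
```

In the full method, each of these centers would carry the remaining regressed per-pixel attributes (opacity, scale, rotation, color) read from the Gaussian parameter maps at the same pixel location.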