Human Gaussian Splatting: Real-time Rendering of Animatable Avatars
Format: Article
Language: English
Abstract: This work addresses the problem of real-time rendering of photorealistic human body avatars learned from multi-view videos. While classical approaches to modeling and rendering virtual humans generally use a textured mesh, recent research has developed neural body representations that achieve impressive visual quality. However, these models are difficult to render in real time, and their quality degrades when the character is animated with body poses different from the training observations. We propose an animatable human model based on 3D Gaussian Splatting, which has recently emerged as a very efficient alternative to neural radiance fields. The body is represented by a set of Gaussian primitives in a canonical space that is deformed with a coarse-to-fine approach combining forward skinning and local non-rigid refinement. We describe how to learn our Human Gaussian Splatting (HuGS) model in an end-to-end fashion from multi-view observations and evaluate it against state-of-the-art approaches for novel pose synthesis of clothed bodies. Our method achieves a 1.5 dB PSNR improvement over the state of the art on the THuman4 dataset while rendering in real time (80 fps at 512x512 resolution).
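The coarse-to-fine deformation described in the abstract can be illustrated with a minimal sketch. The function and parameter names below (`deform_gaussians`, `skin_weights`, `bone_transforms`, `delta_mu`) are hypothetical and not taken from the paper; the sketch only assumes per-Gaussian skinning weights and per-bone rigid transforms, with forward linear blend skinning as the coarse stage and a learned per-Gaussian offset standing in for the local non-rigid refinement:

```python
import numpy as np

def deform_gaussians(mu_c, R_c, skin_weights, bone_transforms, delta_mu=None):
    # mu_c: (N, 3) canonical Gaussian centers
    # R_c: (N, 3, 3) canonical Gaussian orientations
    # skin_weights: (N, B) per-Gaussian skinning weights over B bones
    # bone_transforms: (B, 4, 4) rigid bone transforms for the target pose
    # delta_mu: (N, 3) optional non-rigid offsets (fine stage, assumed learned)

    # Coarse stage: blend bone transforms per Gaussian (standard forward LBS).
    T = np.einsum('nb,bij->nij', skin_weights, bone_transforms)  # (N, 4, 4)
    mu_h = np.concatenate([mu_c, np.ones((len(mu_c), 1))], axis=1)  # homogeneous
    mu_posed = np.einsum('nij,nj->ni', T, mu_h)[:, :3]
    # Rotate orientations by the blended linear part (not exactly a rotation
    # after blending; a real implementation might re-orthonormalize).
    R_posed = T[:, :3, :3] @ R_c
    # Fine stage: local non-rigid refinement as a per-Gaussian offset.
    if delta_mu is not None:
        mu_posed = mu_posed + delta_mu
    return mu_posed, R_posed
```

Keeping the primitives in a fixed canonical space and deforming them forward per frame is what makes this kind of representation fast to animate: only the Gaussian parameters move, and the splatting rasterizer itself is unchanged.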
DOI: 10.48550/arxiv.2311.17113