ActorsNeRF: Animatable Few-shot Human Rendering with Generalizable NeRFs
Main Authors: | |
Format: | Article |
Language: | English |
Subjects: | |
Online Access: | Order full text |
Abstract: | While NeRF-based human representations have shown impressive novel view synthesis results, most methods still rely on a large number of images/views for training. In this work, we propose a novel animatable NeRF called ActorsNeRF. It is first pre-trained on diverse human subjects, and then adapted with few-shot monocular video frames for a new actor with unseen poses. Building on previous generalizable NeRFs that share parameters via a ConvNet encoder, ActorsNeRF further adopts two human priors to capture the large variations in human appearance, shape, and pose. Specifically, in the encoded feature space, we first align different human subjects in a category-level canonical space, and then align the same human across frames in an instance-level canonical space for rendering. We demonstrate quantitatively and qualitatively that ActorsNeRF significantly outperforms the existing state-of-the-art in few-shot generalization to new people and poses on multiple datasets. Project Page: https://jitengmu.github.io/ActorsNeRF/ |
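The two-level alignment in the abstract can be pictured with a short sketch. The following is a minimal, hypothetical PyTorch illustration, not the authors' code: the names `deform_to_category` and `deform_to_instance`, the tiny MLPs standing in for them, and the SMPL-style 72-dimensional pose vector are all assumptions made for a runnable toy; in the paper the alignments are driven by human body priors, and the encoder is a full ConvNet rather than a single convolution.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ActorsNeRFSketch(nn.Module):
    """Illustrative two-level canonical alignment; a hypothetical sketch, not the paper's code."""
    def __init__(self, feat_dim=32, pose_dim=72, hidden=128):
        super().__init__()
        # Stand-in for the shared ConvNet encoder (parameter sharing across subjects).
        self.encoder = nn.Conv2d(3, feat_dim, kernel_size=3, padding=1)
        # Hypothetical learned offsets standing in for the prior-driven deformations.
        self.deform_to_category = nn.Sequential(
            nn.Linear(3 + pose_dim, hidden), nn.ReLU(), nn.Linear(hidden, 3))
        self.deform_to_instance = nn.Sequential(
            nn.Linear(3 + pose_dim, hidden), nn.ReLU(), nn.Linear(hidden, 3))
        # Radiance head: canonical point + pixel-aligned feature -> (RGB, density).
        self.radiance = nn.Sequential(
            nn.Linear(3 + feat_dim, hidden), nn.ReLU(), nn.Linear(hidden, 4))

    def forward(self, image, pts, pose):
        # image: (1, 3, H, W); pts: (N, 3) samples along camera rays, coords in [-1, 1];
        # pose: (pose_dim,) body pose vector (the SMPL-style dimensionality is an assumption).
        feat_map = self.encoder(image)
        pose_rep = pose.expand(pts.shape[0], -1)
        # 1) Category-level: map any subject into one shared canonical space.
        x_cat = pts + self.deform_to_category(torch.cat([pts, pose_rep], dim=-1))
        # 2) Instance-level: align different frames of the same subject for rendering.
        x_inst = x_cat + self.deform_to_instance(torch.cat([x_cat, pose_rep], dim=-1))
        # Pixel-aligned feature lookup at each sample's (x, y) image location.
        grid = pts[None, None, :, :2]                       # (1, 1, N, 2)
        feats = F.grid_sample(feat_map, grid, align_corners=True)[0, :, 0].t()
        # Per-sample (RGB, sigma), to be composited by standard volume rendering.
        return self.radiance(torch.cat([x_inst, feats], dim=-1))

# Toy usage: 256 ray samples rendered against one 64x64 conditioning frame.
model = ActorsNeRFSketch()
rgb_sigma = model(torch.randn(1, 3, 64, 64), torch.rand(256, 3) * 2 - 1, torch.zeros(72))
```

The design choice mirrored here is the ordering: subjects are first normalized into a shared category-level canonical space, so the pre-trained prior transfers across people, and only then aligned per frame at the instance level, so a few monocular frames of a new actor suffice for adaptation.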
DOI: | 10.48550/arxiv.2304.14401 |