Going beyond free viewpoint: creating animatable volumetric video of human performances
Published in: IET Computer Vision, 2020-09, Vol. 14 (6), pp. 350-358
Main authors: , , , , , ,
Format: Article
Language: English
Subjects:
Online access: Full text
Abstract: An end-to-end pipeline for the creation of high-quality animatable volumetric video of human performances is presented. Going beyond the application of free-viewpoint video, the authors allow re-animation and alteration of an actor's performance by enriching the captured data with semantics and animation properties. Hybrid geometry- and video-based animation methods are applied that allow a direct animation of the high-quality data itself, instead of creating a CG model that merely resembles the captured data. Semantic enrichment and animation are achieved by establishing temporal consistency and then automatically rigging each 3D frame with a parametric human body model. The hybrid approach combines the flexibility of classical CG animation with the realism of real captured data. For the face, coarse movements are modelled in the geometry only, while very fine and subtle details, often lacking in purely geometric methods, are captured in video textures that can be interactively combined to form new facial expressions. In addition, regions that are challenging to synthesise, such as the teeth or the eyes, are learned and filled in realistically with an autoencoder-based approach. This study covers the full pipeline from capture and volumetric video production to semantic enrichment for the final hybrid animation.
ISSN: 1751-9632, 1751-9640
DOI: 10.1049/iet-cvi.2019.0786
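
The abstract above describes automatically rigging each temporally consistent 3D frame with a parametric human body model so that the captured geometry itself can be re-animated. A standard mechanism by which such a rig drives a mesh is linear blend skinning; the minimal NumPy sketch below illustrates that mechanism only. It is not the authors' implementation, and the function names, toy skeleton, and skinning weights are illustrative assumptions.

```python
import numpy as np


def linear_blend_skinning(vertices, weights, bone_transforms):
    """Deform a rest-pose mesh with linear blend skinning.

    vertices:        (V, 3) rest-pose vertex positions.
    weights:         (V, B) skinning weights, each row summing to 1
                     (e.g. obtained from automatic rigging).
    bone_transforms: (B, 4, 4) homogeneous transforms mapping each bone
                     from the rest pose to the target pose.
    Returns (V, 3) deformed vertex positions.
    """
    num_vertices = vertices.shape[0]
    # Homogeneous coordinates: (V, 4)
    v_h = np.concatenate([vertices, np.ones((num_vertices, 1))], axis=1)
    # Each vertex transformed by every bone: (B, V, 4)
    per_bone = np.einsum('bij,vj->bvi', bone_transforms, v_h)
    # Blend the per-bone results with the skinning weights: (V, 4)
    blended = np.einsum('vb,bvi->vi', weights, per_bone)
    return blended[:, :3]


if __name__ == '__main__':
    # Toy example: 4 vertices on a line, 2 bones
    # (identity and a 90-degree rotation about the Z axis).
    verts = np.array([[0.0, 0.0, 0.0],
                      [1.0, 0.0, 0.0],
                      [2.0, 0.0, 0.0],
                      [3.0, 0.0, 0.0]])
    w = np.array([[1.0, 0.0],
                  [0.7, 0.3],
                  [0.3, 0.7],
                  [0.0, 1.0]])
    rot_z = np.eye(4)
    c, s = np.cos(np.pi / 2), np.sin(np.pi / 2)
    rot_z[:3, :3] = [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]
    transforms = np.stack([np.eye(4), rot_z])
    print(linear_blend_skinning(verts, w, transforms))
```

In a volumetric video setting, the weights would come from the automatic rigging step and the bone transforms from the target pose of the parametric body model; the video-texture and autoencoder components of the face pipeline are separate and not covered by this sketch.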