AnimateMe: 4D Facial Expressions via Diffusion Models
Saved in:

Main authors: , , , , , ,
Format: Article
Language: English
Online access: Order full text
Summary: The field of photorealistic 3D avatar reconstruction and generation has garnered significant attention in recent years; however, animating such avatars remains challenging. Recent advances in diffusion models have notably enhanced the capabilities of generative models in 2D animation. In this work, we directly utilize these models within the 3D domain to achieve controllable and high-fidelity 4D facial animation. By integrating the strengths of diffusion processes and geometric deep learning, we employ Graph Neural Networks (GNNs) as denoising diffusion models in a novel approach, formulating the diffusion process directly on the mesh space and enabling the generation of 3D facial expressions. This facilitates the generation of facial deformations through a mesh-diffusion-based model. Additionally, to ensure temporal coherence in our animations, we propose a consistent noise sampling method. Through a series of quantitative and qualitative experiments, we show that the proposed method outperforms prior work in 4D expression synthesis by generating high-fidelity extreme expressions. Furthermore, we apply our method to textured 4D facial expression generation via a straightforward extension that involves training on a large-scale textured 4D facial expression database.
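
The abstract points to two concrete mechanisms: a GNN serving as the denoising network of a diffusion process formulated directly on the mesh, and a consistent noise sampling scheme for temporal coherence. Since the paper's architecture is not reproduced in this record, the following PyTorch sketch only illustrates how those two ideas could fit together. Every name in it (GraphConv, MeshDenoiser, sample_sequence) is hypothetical, and reading "consistent noise sampling" as a per-step noise draw shared across all frames of a sequence is an assumption based on the abstract, not the authors' documented procedure.

```python
# A minimal sketch, not the authors' implementation: (a) a GNN acting as the
# denoiser of a diffusion process defined directly on mesh vertices, and
# (b) one plausible reading of "consistent noise sampling", where each
# reverse-diffusion step reuses a single noise draw across all frames.
import torch
import torch.nn as nn

class GraphConv(nn.Module):
    """One mesh-graph convolution: mix each vertex with its neighbors."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin_self = nn.Linear(in_dim, out_dim)
        self.lin_neigh = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        # x: (V, in_dim) vertex features; adj: (V, V) row-normalized adjacency.
        return torch.relu(self.lin_self(x) + self.lin_neigh(adj @ x))

class MeshDenoiser(nn.Module):
    """GNN that predicts the noise on per-vertex displacements at step t."""
    def __init__(self, hidden=128):
        super().__init__()
        # Input: xyz displacement + diffusion step + frame phase (conditioning).
        self.conv1 = GraphConv(3 + 2, hidden)
        self.conv2 = GraphConv(hidden, hidden)
        self.head = nn.Linear(hidden, 3)

    def forward(self, x_t, t_norm, phase, adj):
        v = x_t.shape[0]
        cond = torch.tensor([t_norm, phase]).expand(v, 2)
        h = self.conv1(torch.cat([x_t, cond], dim=-1), adj)
        return self.head(self.conv2(h, adj))

@torch.no_grad()
def sample_sequence(model, adj, n_frames, n_verts, betas):
    """DDPM-style reverse process for a whole expression sequence.

    Temporal coherence: the stochastic part of every reverse step is drawn
    once and shared by all frames, so neighboring frames (which differ only
    in their conditioning) follow correlated denoising trajectories.
    """
    alphas = 1.0 - betas
    alpha_bar = torch.cumprod(alphas, dim=0)
    steps = len(betas)
    # Shared starting noise for every frame of the sequence.
    x = torch.randn(n_verts, 3).expand(n_frames, n_verts, 3).clone()
    for t in reversed(range(steps)):
        eps_shared = torch.randn(n_verts, 3) if t > 0 else torch.zeros(n_verts, 3)
        for f in range(n_frames):
            eps_hat = model(x[f], t / steps, f / max(n_frames - 1, 1), adj)
            mean = (x[f] - betas[t] / torch.sqrt(1 - alpha_bar[t]) * eps_hat) \
                   / torch.sqrt(alphas[t])
            x[f] = mean + torch.sqrt(betas[t]) * eps_shared  # same draw per frame
    return x  # (n_frames, n_verts, 3) per-vertex displacements

# Hypothetical usage: a 10-frame sequence on a 100-vertex mesh.
V, F = 100, 10
adj = torch.eye(V)  # stand-in for a real row-normalized mesh adjacency
seq = sample_sequence(MeshDenoiser(), adj, n_frames=F, n_verts=V,
                      betas=torch.linspace(1e-4, 0.02, 50))
```

In this sketch, coherence comes from sharing the stochastic term of each reverse step across frames while only the frame conditioning varies; the paper's actual scheme may correlate the noise differently.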
DOI: 10.48550/arxiv.2403.17213