VividTalk: One-Shot Audio-Driven Talking Head Generation Based on 3D Hybrid Prior
Format: Article
Language: English
Online Access: Order full text
Abstract: Audio-driven talking head generation has drawn much attention in recent years, and many efforts have been made toward lip-sync, expressive facial expressions, natural head pose generation, and high video quality. However, no model has yet led or tied on all of these metrics due to the one-to-many mapping between audio and motion. In this paper, we propose VividTalk, a two-stage generic framework that supports generating high-visual-quality talking head videos with all of the above properties. Specifically, in the first stage, we map the audio to mesh by learning two motions: non-rigid expression motion and rigid head motion. For expression motion, both blendshapes and vertices are adopted as the intermediate representation to maximize the representation ability of the model. For natural head motion, a novel learnable head pose codebook with a two-phase training mechanism is proposed. In the second stage, we propose a dual-branch motion-VAE and a generator to transform the meshes into dense motion and synthesize high-quality video frame by frame. Extensive experiments show that the proposed VividTalk can generate high-visual-quality talking head videos with lip-sync and realism enhanced by a large margin, and outperforms previous state-of-the-art works in both objective and subjective comparisons.
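The abstract outlines a two-stage architecture: an audio-to-mesh stage (blendshape and vertex expression motion plus a learnable head pose codebook) followed by a mesh-to-video stage (dual-branch motion-VAE and generator). The PyTorch sketch below illustrates how such a pipeline could be wired together. It is a minimal sketch under assumed shapes and module choices; the class names, the GRU encoder, the soft codebook lookup, the 64x64 flow resolution, and all dimensions are illustrative assumptions, and the paper's two-phase codebook training and the dual-branch structure of the motion-VAE are not reproduced here.

```python
# Minimal sketch of the two-stage pipeline described in the abstract.
# All names, shapes, and hyperparameters are illustrative assumptions,
# not the authors' implementation.
import torch
import torch.nn as nn


class AudioToMesh(nn.Module):
    """Stage 1 (sketch): map audio features to per-frame mesh motion.

    Expression motion is predicted as blendshape coefficients plus per-vertex
    offsets; rigid head motion is selected from a learnable pose codebook.
    """

    def __init__(self, audio_dim=256, n_blendshapes=52, n_vertices=5023,
                 codebook_size=64, pose_dim=6):
        super().__init__()
        self.encoder = nn.GRU(audio_dim, 256, batch_first=True)
        self.blendshape_head = nn.Linear(256, n_blendshapes)
        self.vertex_head = nn.Linear(256, n_vertices * 3)
        # Learnable head pose codebook: each entry is a rigid pose (rotation + translation).
        self.pose_codebook = nn.Parameter(torch.randn(codebook_size, pose_dim))
        self.pose_logits = nn.Linear(256, codebook_size)

    def forward(self, audio_feat):            # (B, T, audio_dim)
        h, _ = self.encoder(audio_feat)       # (B, T, 256)
        blendshapes = self.blendshape_head(h)
        vertex_offsets = self.vertex_head(h)
        # Soft codebook lookup; the paper's two-phase training is omitted here.
        weights = torch.softmax(self.pose_logits(h), dim=-1)
        head_pose = weights @ self.pose_codebook   # (B, T, pose_dim)
        return blendshapes, vertex_offsets, head_pose


class MeshToVideo(nn.Module):
    """Stage 2 (sketch): turn driven meshes into dense motion for a generator."""

    def __init__(self, mesh_dim=5023 * 3, motion_latent=128):
        super().__init__()
        # Stand-in for the dual-branch motion-VAE: encode mesh motion to a latent.
        self.motion_encoder = nn.Sequential(
            nn.Linear(mesh_dim, 512), nn.ReLU(), nn.Linear(512, motion_latent))
        # Stand-in for the generator input: decode the latent to a dense flow field.
        self.flow_decoder = nn.Sequential(
            nn.Linear(motion_latent, 512), nn.ReLU(), nn.Linear(512, 64 * 64 * 2))

    def forward(self, mesh_seq):              # (B, T, mesh_dim)
        z = self.motion_encoder(mesh_seq)
        flow = self.flow_decoder(z).view(*mesh_seq.shape[:2], 2, 64, 64)
        return flow                           # per-frame dense motion field


if __name__ == "__main__":
    stage1, stage2 = AudioToMesh(), MeshToVideo()
    audio = torch.randn(1, 30, 256)           # 30 frames of audio features
    bs, verts, pose = stage1(audio)
    flow = stage2(verts)                      # (1, 30, 2, 64, 64) dense motion
```

The split mirrors the abstract's description: stage 1 only reasons about geometry (expression and head pose), while stage 2 converts driven meshes into image-space motion that a frame-by-frame generator could consume.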
DOI: 10.48550/arxiv.2312.01841