LetsTalk: Latent Diffusion Transformer for Talking Video Synthesis
Saved in:
Main authors:
Format: Article
Language: English
Subjects:
Online access: Order full text
Summary: Portrait image animation driven by audio has advanced rapidly, enabling the creation of increasingly realistic and expressive animated faces. The challenges of this multimodality-guided video generation task lie in fusing the different modalities while ensuring temporal and portrait consistency; we further seek to produce vivid talking heads. To address these challenges, we present LetsTalk (LatEnt Diffusion TranSformer for Talking Video Synthesis), a diffusion transformer that incorporates modular temporal and spatial attention mechanisms to fuse the multimodal inputs and enhance spatial-temporal consistency. To handle the multimodal conditions, we first summarize three fusion schemes, ranging from shallow to deep fusion, and thoroughly explore their impact and applicability. We then propose a solution suited to the modality differences among image, audio, and video generation. For the portrait, we use a deep fusion scheme (Symbiotic Fusion) to ensure portrait consistency. For the audio, we use a shallow fusion scheme (Direct Fusion) to achieve audio-animation alignment while preserving diversity. Extensive experiments demonstrate that our approach generates temporally coherent and realistic videos with enhanced diversity and liveliness.
DOI: 10.48550/arxiv.2411.16748
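
The abstract above describes the architecture only at a high level: a latent diffusion transformer with modular spatial and temporal attention, a deep Symbiotic Fusion path for the reference portrait, and a shallow Direct Fusion path for the audio. The following is a minimal, hypothetical PyTorch sketch of what one such transformer block could look like. It is not the authors' implementation; all module names, tensor layouts, and layer choices (token concatenation for the deep fusion path, cross-attention for the shallow one) are assumptions made for illustration.

```python
# Hypothetical sketch (not the authors' code): one LetsTalk-style block combining
# modular spatial/temporal attention with two fusion paths from the abstract:
# Symbiotic Fusion (deep: portrait tokens join spatial self-attention) and
# Direct Fusion (shallow: audio injected only via cross-attention).
import torch
import torch.nn as nn


class SpatialTemporalBlock(nn.Module):
    def __init__(self, dim: int = 512, heads: int = 8):
        super().__init__()
        self.spatial_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.temporal_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.audio_cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm1, self.norm2 = nn.LayerNorm(dim), nn.LayerNorm(dim)
        self.norm3, self.norm4 = nn.LayerNorm(dim), nn.LayerNorm(dim)
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))

    def forward(self, video, portrait, audio):
        # video:    (B, T, N, D) latent video tokens (T frames, N patches per frame)
        # portrait: (B, N, D)    reference-portrait tokens
        # audio:    (B, T, M, D) audio tokens aligned per frame
        B, T, N, D = video.shape

        # Symbiotic Fusion (deep): portrait tokens are concatenated with the frame
        # tokens so they take part in every spatial self-attention step.
        x = video.reshape(B * T, N, D)
        ref = portrait.unsqueeze(1).expand(B, T, -1, -1).reshape(B * T, -1, D)
        h = self.norm1(torch.cat([x, ref], dim=1))
        x = x + self.spatial_attn(h[:, :N], h, h, need_weights=False)[0]

        # Direct Fusion (shallow): audio conditions the video tokens only through
        # a single cross-attention, preserving diversity in the generated motion.
        a = audio.reshape(B * T, -1, D)
        h = self.norm2(x)
        x = x + self.audio_cross_attn(h, a, a, need_weights=False)[0]

        # Temporal attention: attend across frames at each spatial location.
        x = x.reshape(B, T, N, D).permute(0, 2, 1, 3).reshape(B * N, T, D)
        h = self.norm3(x)
        x = x + self.temporal_attn(h, h, h, need_weights=False)[0]

        # Feed-forward, then restore the (B, T, N, D) layout.
        x = x + self.mlp(self.norm4(x))
        return x.reshape(B, N, T, D).permute(0, 2, 1, 3)


# Toy usage: 2 frames, 16 patches, 4 audio tokens per frame, 64-dim latents.
block = SpatialTemporalBlock(dim=64, heads=4)
out = block(torch.randn(1, 2, 16, 64), torch.randn(1, 16, 64), torch.randn(1, 2, 4, 64))
print(out.shape)  # torch.Size([1, 2, 16, 64])
```

In this sketch the portrait participates in every spatial self-attention step (deep fusion), while the audio touches the video tokens only through one cross-attention layer (shallow fusion), mirroring the compactness distinction the abstract draws between Symbiotic Fusion and Direct Fusion.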