SegTalker: Segmentation-based Talking Face Generation with Mask-guided Local Editing
Format: Article
Language: English
Abstract: Audio-driven talking face generation aims to synthesize video with lip
movements synchronized to input audio. However, current generative techniques
face challenges in preserving intricate regional textures (skin, teeth). To
address these challenges, we propose a novel framework called SegTalker that
decouples lip movements from image textures by introducing segmentation as an
intermediate representation. Specifically, given the mask of an image obtained
from a parsing network, we first leverage the speech to drive the mask and
generate a talking segmentation. We then disentangle the semantic regions of the
image into style codes using a mask-guided encoder. Finally, we inject the
generated talking segmentation and style codes into a mask-guided StyleGAN to
synthesize the video frames. In this way, most textures are fully preserved.
Moreover, our approach inherently achieves background separation and facilitates
mask-guided local facial editing. In particular, by editing the mask and swapping
the region textures from a given reference image (e.g. hair, lips, eyebrows),
our approach enables seamless facial editing while generating the talking face
video. Experiments demonstrate that our approach effectively preserves texture
details and generates temporally consistent video while remaining competitive in
lip synchronization. Quantitative and qualitative results on the HDTF and MEAD
datasets illustrate the superior performance of our method over existing methods.
DOI: 10.48550/arxiv.2409.03605
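
To make the pipeline described in the abstract concrete, the following is a minimal sketch of the generation and mask-guided editing flow: parse the source frame into a mask, drive the mask with speech, encode per-region style codes, optionally swap region codes from a reference image, and synthesize the frame with a mask-guided generator. All module names and interfaces here (the parser, seg_driver, encoder, generator, and the region-keyed style-code dictionary) are assumptions for illustration, not the authors' released code.

```python
# Hedged sketch of the SegTalker pipeline as summarized in the abstract.
# Every component below is a hypothetical placeholder module; the real
# implementation may differ in structure and interfaces.

import torch
import torch.nn as nn


class SegTalkerPipeline(nn.Module):
    """Synthesize a talking-face frame from audio, a source frame, and its parsing mask."""

    def __init__(self, parser: nn.Module, seg_driver: nn.Module,
                 encoder: nn.Module, generator: nn.Module):
        super().__init__()
        self.parser = parser          # face parsing network -> semantic mask
        self.seg_driver = seg_driver  # (mask, audio) -> talking segmentation
        self.encoder = encoder        # mask-guided encoder -> per-region style codes
        self.generator = generator    # mask-guided StyleGAN-style generator

    def forward(self, source_frame, audio_feat, ref_frame=None, swap_regions=()):
        # 1) Parse the source frame into a semantic segmentation mask.
        mask = self.parser(source_frame)

        # 2) Drive the mask with speech to obtain the talking segmentation
        #    (mouth/jaw layout synchronized to the audio).
        talking_seg = self.seg_driver(mask, audio_feat)

        # 3) Disentangle semantic regions into per-region style codes
        #    (assumed here to be a dict mapping region name -> code).
        style_codes = self.encoder(source_frame, mask)

        # 4) Optional mask-guided local editing: swap region textures
        #    (e.g. "hair", "lip", "eyebrows") from a reference image.
        if ref_frame is not None and swap_regions:
            ref_mask = self.parser(ref_frame)
            ref_codes = self.encoder(ref_frame, ref_mask)
            for region in swap_regions:
                style_codes[region] = ref_codes[region]

        # 5) Inject the talking segmentation and style codes into the
        #    mask-guided generator to synthesize the output frame.
        return self.generator(talking_seg, style_codes)


# Hypothetical usage, assuming the four component modules exist:
# pipe = SegTalkerPipeline(parser, seg_driver, encoder, generator)
# frame = pipe(source_frame, audio_feat, ref_frame=reference, swap_regions=("hair",))
```

Because textures enter the generator only through the per-region style codes, regions untouched by the audio keep their original appearance, which is how the abstract's texture preservation and background separation follow from the same mechanism as local editing.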