ControlFace: Harnessing Facial Parametric Control for Face Rigging
Saved in:

| Main author: | , , , |
|---|---|
| Format: | Article |
| Language: | eng |
| Subjects: | |
| Online access: | Order full text |
Summary: | Manipulation of facial images to meet specific controls such as pose,
expression, and lighting, also known as face rigging, is a complex task in
computer vision. Existing methods are limited by their reliance on image
datasets, which necessitates individual-specific fine-tuning and limits their
ability to retain fine-grained identity and semantic details, reducing
practical usability. To overcome these limitations, we introduce ControlFace, a
novel face rigging method conditioned on 3DMM renderings that enables flexible,
high-fidelity control. We employ dual-branch U-Nets: one, referred to as
FaceNet, captures identity and fine details, while the other focuses on
generation. To enhance control precision, the control mixer module encodes the
correlated features between the target-aligned control and reference-aligned
control, and a novel guidance method, reference control guidance, steers the
generation process for better control adherence. By training on a facial video
dataset, we fully utilize FaceNet's rich representations while ensuring control
adherence. Extensive experiments demonstrate ControlFace's superior performance
in identity preservation and control precision, highlighting its practicality.
Please see the project website: https://cvlab-kaist.github.io/ControlFace/. |
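The abstract does not spell out how "reference control guidance" steers sampling. As a purely illustrative sketch, assuming it follows the common classifier-free-guidance pattern of linearly extrapolating between an unguided and a control-conditioned score prediction (the function name, arguments, and scale `w` below are hypothetical, not from the paper):

```python
def guided_score(eps_uncond, eps_control, w=2.0):
    """Hypothetical CFG-style combination for control guidance.

    eps_uncond:  score/noise prediction without the 3DMM control rendering
    eps_control: prediction conditioned on the target-aligned control
    w:           guidance scale; w=1 recovers the controlled prediction,
                 w>1 extrapolates toward stronger control adherence
    """
    return [u + w * (c - u) for u, c in zip(eps_uncond, eps_control)]


# Example: with w=1 the guided prediction equals the controlled one.
print(guided_score([0.0, 0.0], [1.0, 1.0], w=1.0))  # [1.0, 1.0]
```

The actual formulation in ControlFace may differ; see the paper for details.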
DOI: | 10.48550/arxiv.2412.01160 |