BeautyREC: Robust, Efficient, and Content-preserving Makeup Transfer
Format: Article
Language: English
Abstract: In this work, we propose a Robust, Efficient, and Component-specific makeup transfer method (abbreviated as BeautyREC). In a unique departure from prior methods that leverage global attention, simply concatenate features, or implicitly manipulate features in latent space, we propose a component-specific correspondence to directly transfer the makeup style of a reference image to the corresponding components (e.g., skin, lips, eyes) of a source image, enabling elaborate and accurate local makeup transfer. As an auxiliary, the long-range visual dependencies of the Transformer are introduced for effective global makeup transfer. Instead of the commonly used cycle structure, which is complex and unstable, we employ a content consistency loss coupled with a content encoder to implement efficient single-path makeup transfer. The key insights of this study are modeling component-specific correspondence for local makeup transfer, capturing long-range dependencies for global makeup transfer, and enabling efficient makeup transfer via a single-path structure. We also contribute BeautyFace, a makeup transfer dataset that supplements existing datasets. This dataset contains 3,000 faces, covering more diverse makeup styles, face poses, and races. Each face has an annotated parsing map. Extensive experiments demonstrate the effectiveness of our method against state-of-the-art methods. Besides, our method is appealing as it has only 1M parameters, outperforming the state-of-the-art methods (BeautyGAN: 8.43M, PSGAN: 12.62M, SCGAN: 15.30M, CPM: 9.24M, SSAT: 10.48M).
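The component-specific idea can be illustrated with a toy sketch (this is not the paper's learned correspondence, only an analogy): given face parsing maps for the source and reference, one can transfer makeup color statistics per component (skin, lips, eyes) by matching the mean and standard deviation of each region's color channels. The function name, label scheme, and moment-matching rule below are all illustrative assumptions.

```python
import numpy as np

def component_color_transfer(src, ref, src_parse, ref_parse, components):
    """Toy per-component color-statistic transfer (illustrative only).

    src, ref:          HxWx3 uint8 images.
    src_parse, ref_parse: HxW integer parsing maps (hypothetical labels,
                          e.g. 1 = skin, 2 = lips, 3 = eyes).
    components:        labels to transfer.
    """
    out = src.astype(np.float64).copy()
    for c in components:
        src_mask = src_parse == c
        ref_mask = ref_parse == c
        if src_mask.sum() == 0 or ref_mask.sum() == 0:
            continue  # component absent in one of the images
        for ch in range(3):
            channel = out[..., ch]          # view into `out`
            s = channel[src_mask]
            r = ref[..., ch][ref_mask].astype(np.float64)
            s_mu, s_sd = s.mean(), s.std() + 1e-6
            r_mu, r_sd = r.mean(), r.std() + 1e-6
            # Match first- and second-order color statistics per region.
            channel[src_mask] = (s - s_mu) / s_sd * r_sd + r_mu
    return np.clip(out, 0, 255).astype(np.uint8)
```

The actual method learns dense component-specific correspondences and decodes the result with a network; this sketch only conveys why restricting the transfer to matching components avoids leaking, say, lip color onto skin.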
DOI: 10.48550/arxiv.2212.05855