MHGAN: Multi-Hierarchies Generative Adversarial Network for High-Quality Face Sketch Synthesis

Bibliographic Details
Published in: IEEE Access, 2020, Vol. 8, p. 212995-213011
Main authors: Du, Kangning, Zhou, Huaqiang, Cao, Lin, Guo, Yanan, Wang, Tao
Format: Article
Language: English
Online access: Full text
Description

Summary: Face sketch synthesis has made significant progress in the past few years. Recently, GAN-based methods have shown promising results on image-to-image translation problems, especially photo-to-sketch synthesis. Because a facial sketch has a hyper-abstract style and continuous graphic elements, its local details are more prone to small artifacts and blur than those of other image styles. Existing face sketch synthesis methods lack models for specific facial regions and usually generate face sketches with only coarse structures. To synthesize high-quality sketches and overcome blur and deformation, this paper proposes a novel Multi-Hierarchies GAN (MHGAN), which divides the face image into multiple hierarchical structures to learn the features of different facial regions. It comprises three modules: a local region module, a mask module, and a fusion module. The local region module learns the detailed features of different local regions of the face with a GAN. The mask module generates the coarse facial structure of a sketch and uses a facial feature extractor to enhance the high-level image and learn latent-space features. The fusion module generates the final sketch by combining the fine local regions with the coarse facial structure. Extensive qualitative and quantitative experiments show that the proposed method outperforms state-of-the-art methods on the standard CUFS and CUFSF datasets and on photos collected from the Internet.
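
The abstract describes the local-region / mask / fusion split only at a high level. As a rough illustration, the following PyTorch sketch shows how such a three-module generator pipeline could be wired together; the module names, layer widths, crop coordinates, and the paste-back step are illustrative assumptions, not the paper's published architecture.

```python
# Minimal sketch of a local-region / mask / fusion generator pipeline in the
# spirit of the abstract. All sizes and crops below are illustrative assumptions.
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch):
    # 3x3 convolution + instance norm + ReLU, a common generator building block.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1),
        nn.InstanceNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )


class LocalRegionGenerator(nn.Module):
    # Refines one cropped facial region (e.g. the eyes) into a fine sketch patch.
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            conv_block(3, 32), conv_block(32, 32),
            nn.Conv2d(32, 1, 3, padding=1), nn.Tanh(),
        )

    def forward(self, region_photo):
        return self.net(region_photo)


class MaskGenerator(nn.Module):
    # Produces a coarse, full-face structural sketch from the whole photo.
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            conv_block(3, 32), conv_block(32, 64), conv_block(64, 32),
            nn.Conv2d(32, 1, 3, padding=1), nn.Tanh(),
        )

    def forward(self, photo):
        return self.net(photo)


class FusionGenerator(nn.Module):
    # Merges the coarse structure with pasted-back fine patches into the final sketch.
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            conv_block(2, 32), conv_block(32, 32),
            nn.Conv2d(32, 1, 3, padding=1), nn.Tanh(),
        )

    def forward(self, coarse_sketch, fine_regions):
        return self.net(torch.cat([coarse_sketch, fine_regions], dim=1))


# Toy forward pass on a single 256x256 photo (inference only).
with torch.no_grad():
    photo = torch.randn(1, 3, 256, 256)
    coarse = MaskGenerator()(photo)
    fine_canvas = torch.zeros(1, 1, 256, 256)                        # canvas for refined patches
    eye_patch = LocalRegionGenerator()(photo[:, :, 64:128, 48:208])  # hypothetical eye crop
    fine_canvas[:, :, 64:128, 48:208] = eye_patch                    # paste the refined patch back
    final_sketch = FusionGenerator()(coarse, fine_canvas)
    print(final_sketch.shape)  # torch.Size([1, 1, 256, 256])
```

In such a design, the coarse branch supplies the global facial layout while the region branch supplies sharp local detail, and the fusion stage reconciles the two into a single sketch.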
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2020.3041284