Segmentation Guided Image-to-Image Translation with Adversarial Networks
Format: Article
Language: English
Abstract: Recently, image-to-image translation has received increasing
attention; it aims to map images in one domain to another specific domain.
Existing methods mainly solve this task via deep generative models and focus
on exploring the relationship between different domains. However, these
methods neglect to utilize higher-level, instance-specific information to
guide the training process, leading to many unrealistic generated images of
low quality. Existing methods also lack spatial controllability during
translation. To address these challenges, we propose novel Segmentation
Guided Generative Adversarial Networks (SGGAN), which leverage semantic
segmentation to further boost generation performance and provide spatial
mapping. In particular, a segmentor network is designed to impose semantic
information on the generated images. Experimental results on a multi-domain
face image translation task empirically demonstrate the model's capacity for
spatial modification and its superiority in image quality over several
state-of-the-art methods.
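The abstract states that a segmentor network imposes semantic information on the generated images. One common way to realize this idea (a minimal sketch, not the paper's actual implementation) is to add a per-pixel segmentation cross-entropy term to the generator's adversarial objective; the names `seg_loss`, `generator_loss`, and the weight `lambda_seg` below are illustrative assumptions:

```python
import numpy as np

def seg_loss(pred_probs, target_labels):
    """Per-pixel cross-entropy between the segmentor's predicted class
    probabilities (H, W, C) and the target semantic layout (H, W, int).
    A hypothetical stand-in for the paper's segmentation guidance term."""
    h, w = target_labels.shape
    # Pick the predicted probability of the correct class at each pixel.
    picked = pred_probs[np.arange(h)[:, None],
                        np.arange(w)[None, :],
                        target_labels]
    return float(-np.log(np.clip(picked, 1e-12, None)).mean())

def generator_loss(adv_loss, pred_probs, target_labels, lambda_seg=1.0):
    """Combined generator objective: fool the discriminator (adv_loss)
    while matching the target segmentation (weighted by lambda_seg)."""
    return adv_loss + lambda_seg * seg_loss(pred_probs, target_labels)
```

When the segmentor's prediction on a generated image exactly matches the target layout, the segmentation term vanishes and only the adversarial loss remains, so the weight `lambda_seg` trades off realism against spatial fidelity.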
DOI: 10.48550/arxiv.1901.01569