Encoding in Style: a StyleGAN Encoder for Image-to-Image Translation
Main authors: Elad Richardson, Yuval Alaluf, Or Patashnik, Yotam Nitzan, Yaniv Azar, Stav Shapiro, Daniel Cohen-Or
Format: Article
Language: English
Online access: Order full text
Abstract: We present a generic image-to-image translation framework, pixel2style2pixel (pSp). Our pSp framework is based on a novel encoder network that directly generates a series of style vectors which are fed into a pretrained StyleGAN generator, forming the extended W+ latent space. We first show that our encoder can directly embed real images into W+, with no additional optimization. Next, we propose utilizing our encoder to directly solve image-to-image translation tasks, defining them as encoding problems from some input domain into the latent domain. By deviating from the standard "invert first, edit later" methodology used with previous StyleGAN encoders, our approach can handle a variety of tasks even when the input image is not represented in the StyleGAN domain. We show that solving translation tasks through StyleGAN significantly simplifies the training process, as no adversary is required, has better support for solving tasks without pixel-to-pixel correspondence, and inherently supports multi-modal synthesis via the resampling of styles. Finally, we demonstrate the potential of our framework on a variety of facial image-to-image translation tasks, even when compared to state-of-the-art solutions designed specifically for a single task, and further show that it can be extended beyond the human facial domain.
DOI: 10.48550/arxiv.2008.00951
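
To make the abstract's central construction concrete, here is a minimal sketch of an encoder that maps an image directly to a series of style vectors, a single point in the extended W+ space, in one forward pass with no per-image optimization. Everything here (the `ToyEncoder`, its layer sizes, the commented-out `pretrained_stylegan` call) is an illustrative stand-in, not the authors' architecture; the real pSp encoder extracts its styles from a feature-pyramid backbone.

```python
import torch
import torch.nn as nn

N_STYLES, STYLE_DIM = 18, 512  # 18 style vectors of dim 512 for a 1024x1024 StyleGAN

class ToyEncoder(nn.Module):
    """Hypothetical stand-in for the pSp encoder: maps an image to N_STYLES
    style vectors, i.e. one point in the extended W+ latent space."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.to_styles = nn.Linear(128, N_STYLES * STYLE_DIM)

    def forward(self, x):
        w_plus = self.to_styles(self.backbone(x))
        return w_plus.view(-1, N_STYLES, STYLE_DIM)  # (batch, 18, 512)

encoder = ToyEncoder()
image = torch.randn(1, 3, 256, 256)  # input from an arbitrary source domain
w_plus = encoder(image)              # direct embedding into W+, no optimization loop

# A real pipeline would now decode with a frozen, pretrained StyleGAN:
#   output = pretrained_stylegan.synthesis(w_plus)  # hypothetical call
print(w_plus.shape)  # torch.Size([1, 18, 512])
```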
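The abstract also notes that pSp inherently supports multi-modal synthesis via the resampling of styles. One common way to realize this with a style-based generator is to keep the coarse style vectors, which carry the input's layout, and redraw the finer ones, so each draw yields a different plausible output. The `MIX_FROM` cutoff and the plain-Gaussian sampling below are illustrative assumptions; a faithful version would obtain new styles by pushing z ~ N(0, I) through the generator's mapping network.

```python
import torch

N_STYLES, STYLE_DIM, MIX_FROM = 18, 512, 8  # MIX_FROM is an illustrative cutoff

def resample_styles(w_plus: torch.Tensor, mix_from: int = MIX_FROM) -> torch.Tensor:
    """Multi-modal synthesis sketch: keep the encoder's coarse styles
    (structure from the input) and replace the finer styles with freshly
    sampled ones. Plain Gaussian noise stands in for properly mapped
    w vectors here, which is a simplification."""
    sampled = torch.randn_like(w_plus)           # stand-in for mapped w vectors
    mixed = w_plus.clone()
    mixed[:, mix_from:] = sampled[:, mix_from:]  # swap fine-layer styles only
    return mixed

w_plus = torch.randn(1, N_STYLES, STYLE_DIM)     # e.g. an encoder output
variants = [resample_styles(w_plus) for _ in range(3)]  # three distinct outputs
```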
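Finally, the claim that "no adversary is required" follows from the generator staying frozen: it already supplies the realism prior, so training reduces to fitting the encoder with ordinary regression-style losses and no discriminator in the loop. Below is a heavily simplified sketch of one such training step; the `generator` and `encoder` linear layers and the toy dimensions are hypothetical stand-ins, and a single pixel-wise loss stands in for the paper's fuller set of reconstruction-type objectives.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy dimensions so the sketch runs instantly; a real setup would use
# 18 x 512 style vectors and full-resolution images.
LATENT, IMG = 64, 3 * 32 * 32

# Hypothetical frozen stand-in for a pretrained StyleGAN: its weights
# provide the realism prior and are never updated.
generator = nn.Linear(LATENT, IMG).requires_grad_(False)
encoder = nn.Linear(IMG, LATENT)                       # toy encoder stand-in
opt = torch.optim.Adam(encoder.parameters(), lr=1e-4)  # only the encoder trains

source = torch.randn(4, IMG)  # input-domain images (flattened toy data)
target = torch.randn(4, IMG)  # corresponding target-domain images

w = encoder(source)               # encode the input into the latent domain
recon = generator(w)              # decode through the frozen prior
loss = F.mse_loss(recon, target)  # plain regression loss; no adversarial term
opt.zero_grad()
loss.backward()                   # gradients flow through the frozen generator
opt.step()                        # ...but only the encoder is updated
```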