Unsupervised Discovery of Interpretable Directions in the GAN Latent Space
Format: Article
Language: English
Online Access: Order full text
Abstract: The latent spaces of GAN models often have semantically meaningful
directions. Moving in these directions corresponds to human-interpretable image
transformations, such as zooming or recoloring, enabling a more controllable
generation process. However, the discovery of such directions is currently
performed in a supervised manner, requiring human labels, pretrained models, or
some form of self-supervision. These requirements severely restrict the range of
directions existing approaches can discover. In this paper, we introduce an
unsupervised method to identify interpretable directions in the latent space of
a pretrained GAN model. By a simple model-agnostic procedure, we find
directions corresponding to sensible semantic manipulations without any form of
(self-)supervision. Furthermore, we reveal several non-trivial findings, which
would be difficult to obtain by existing methods, e.g., a direction
corresponding to background removal. As an immediate practical benefit of our
work, we show how to exploit this finding to achieve competitive performance
for weakly-supervised saliency detection.
DOI: 10.48550/arxiv.2002.03754
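
The abstract describes the procedure only at a high level. For intuition, below is a minimal, self-contained PyTorch sketch of the kind of setup the paper proposes: a learnable matrix of candidate latent directions is trained jointly with a reconstructor network that must recover which direction was applied to a latent code, and by how much. Directions become interpretable because the loss rewards shifts that cause distinct, recognizable image changes. The toy generator, network sizes, shift range, and loss weighting here are illustrative assumptions, not the authors' exact implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Illustrative stand-in for a pretrained GAN generator (frozen).
# In practice this would be a real checkpoint, e.g. BigGAN or StyleGAN.
class ToyGenerator(nn.Module):
    def __init__(self, latent_dim=128, img_dim=3 * 32 * 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, img_dim), nn.Tanh(),
        )

    def forward(self, z):
        return self.net(z)

latent_dim, n_directions, img_dim = 128, 64, 3 * 32 * 32

G = ToyGenerator(latent_dim, img_dim).eval()
for p in G.parameters():
    p.requires_grad_(False)  # the pretrained GAN itself stays fixed

# Learnable matrix whose columns are the candidate latent directions.
A = nn.Linear(n_directions, latent_dim, bias=False)

# Reconstructor: given the pair (G(z), G(z + shift)), it must recover
# which direction index k was used and the shift magnitude eps.
class Reconstructor(nn.Module):
    def __init__(self, img_dim, n_dirs):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(2 * img_dim, 512), nn.ReLU())
        self.dir_head = nn.Linear(512, n_dirs)  # classify direction index
        self.eps_head = nn.Linear(512, 1)       # regress shift magnitude

    def forward(self, x1, x2):
        h = self.backbone(torch.cat([x1, x2], dim=1))
        return self.dir_head(h), self.eps_head(h).squeeze(1)

R = Reconstructor(img_dim, n_directions)
opt = torch.optim.Adam(list(A.parameters()) + list(R.parameters()), lr=1e-4)

for step in range(1000):
    z = torch.randn(32, latent_dim)
    k = torch.randint(0, n_directions, (32,))  # random direction index
    eps = (torch.rand(32) * 2 - 1) * 6.0       # random signed magnitude
    shift = A(eps.unsqueeze(1) * F.one_hot(k, n_directions).float())
    logits, eps_hat = R(G(z), G(z + shift))
    # Directions are only recoverable if each causes a distinct,
    # consistent image change — optimizing this disentangles them.
    loss = F.cross_entropy(logits, k) + 0.25 * F.l1_loss(eps_hat, eps)
    opt.zero_grad()
    loss.backward()
    opt.step()
    with torch.no_grad():  # keep columns unit-norm so eps carries magnitude
        A.weight /= A.weight.norm(dim=0, keepdim=True)
```

After training, each column of `A.weight` is a discovered direction, and an edit is simply `G(z + eps * direction)` for a chosen column and magnitude; inspecting generated images along each direction reveals which semantic transformation (zoom, recoloring, or, as the abstract notes, background removal) it controls.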