StyleT2F: Generating Human Faces from Textual Description Using StyleGAN2
Saved in:

| Main authors: | |
|---|---|
| Format: | Article |
| Language: | eng |
| Subjects: | |
| Online access: | Order full text |
Abstract:

AI-driven image generation has improved significantly in recent years. Generative adversarial networks (GANs), such as StyleGAN, can generate high-quality, realistic data while also offering artistic control over the output. In this work, we present StyleT2F, a method for controlling the output of StyleGAN2 using text, so that a detailed human face can be generated from a textual description. We utilize StyleGAN's latent space to manipulate different facial features and conditionally sample the required latent code, which embeds the facial features mentioned in the input text. Our method captures the required features correctly and shows consistency between the input text and the output images. Moreover, our method guarantees disentanglement when manipulating a wide range of facial features that sufficiently describe a human face.
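The core operation the abstract describes, steering a StyleGAN2 latent code along feature directions that correspond to attributes parsed from the input text, can be illustrated with a short sketch. This is a minimal illustration under stated assumptions, not the paper's actual pipeline: the feature directions are assumed to have been learned offline (e.g., from attribute-labeled latent codes), and names such as `edit_latent` and `attribute_directions` are hypothetical.

```python
import numpy as np

# Minimal sketch of latent-space feature editing in the spirit of StyleT2F.
# Assumption: one unit direction per facial attribute has been learned
# offline in StyleGAN2's 512-dimensional W space; all names are hypothetical.

def edit_latent(w, direction, strength):
    """Move a W-space latent code along one attribute direction."""
    return w + strength * direction

rng = np.random.default_rng(0)
w = rng.standard_normal(512)  # stands in for a sampled W-space latent code

# Placeholder directions; in practice these might come from, e.g.,
# linear classifiers trained on attribute-labeled latent codes.
attribute_directions = {
    "smiling": rng.standard_normal(512),
    "black_hair": rng.standard_normal(512),
}

# Attributes extracted from a description such as "a smiling person with
# black hair" select which directions to apply and how strongly.
for name in ["smiling", "black_hair"]:
    d = attribute_directions[name]
    w = edit_latent(w, d / np.linalg.norm(d), strength=1.5)

# `w` would then be fed to the StyleGAN2 synthesis network to render the face.
```

Applying each attribute along its own, ideally mutually orthogonal, direction is what makes the disentanglement claim meaningful: editing hair color should not also change the smile.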
DOI: 10.48550/arxiv.2204.07924