Human Annotations Improve GAN Performances
Main authors:
Format: Article
Language: English
Subjects:
Online access: Order full text
Abstract: Generative Adversarial Networks (GANs) have shown great success in many
applications. In this work, we present a novel method that leverages human
annotations to improve the quality of generated images. Unlike previous
paradigms that simply ask annotators to distinguish between real and fake data,
we propose and annotate a set of carefully designed attributes that encode
important image information at various levels, in order to understand the
differences between fake and real images. Specifically, we have collected an
annotated dataset containing 600 fake images and 400 real images. These images
were evaluated by 10 workers from Amazon Mechanical Turk (AMT) against eight
carefully defined attributes. Statistical analyses reveal different
distributions of the proposed attributes between real and fake images. These
attributes are shown to be useful in discriminating fake images from real ones,
and deep neural networks are developed to predict them automatically. We
further utilize this information by integrating the attributes into GANs to
generate better images. Experimental results evaluated with multiple metrics
show a performance improvement for the proposed model.
DOI: 10.48550/arxiv.1911.06460
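The abstract describes integrating the annotated attributes into GAN training, but does not specify the architecture or the loss. The sketch below is one minimal PyTorch illustration of that idea, assuming a pre-trained attribute predictor whose scores enter the generator objective as an auxiliary term; the network sizes, the attribute targets, and the weight `lam` are assumptions for illustration, not the paper's actual method.

```python
# A minimal sketch of attribute-guided GAN training (assumptions noted above).
import torch
import torch.nn as nn

NUM_ATTRIBUTES = 8   # eight annotated attributes, per the abstract
LATENT_DIM = 100

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM, 256), nn.ReLU(),
            nn.Linear(256, 3 * 32 * 32), nn.Tanh(),
        )
    def forward(self, z):
        return self.net(z).view(-1, 3, 32, 32)

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(3 * 32 * 32, 256), nn.LeakyReLU(0.2),
            nn.Linear(256, 1),
        )
    def forward(self, x):
        return self.net(x)

class AttributePredictor(nn.Module):
    """Stand-in for a network pre-trained on the human-annotated attributes."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(3 * 32 * 32, 256), nn.ReLU(),
            nn.Linear(256, NUM_ATTRIBUTES), nn.Sigmoid(),
        )
    def forward(self, x):
        return self.net(x)

def generator_loss(d_fake_logits, attr_scores, lam=0.1):
    """Non-saturating GAN loss plus an auxiliary attribute term.

    The auxiliary term pushes generated images toward attribute scores
    associated here with real images (all-ones targets: a simplifying
    assumption, not taken from the paper).
    """
    bce = nn.functional.binary_cross_entropy_with_logits
    adv = bce(d_fake_logits, torch.ones_like(d_fake_logits))
    attr = nn.functional.mse_loss(attr_scores, torch.ones_like(attr_scores))
    return adv + lam * attr

# One illustrative generator update.
G, D, A = Generator(), Discriminator(), AttributePredictor()
for p in A.parameters():          # the attribute predictor is kept fixed
    p.requires_grad_(False)
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)

z = torch.randn(16, LATENT_DIM)
fake = G(z)
loss_g = generator_loss(D(fake), A(fake))
opt_g.zero_grad()
loss_g.backward()
opt_g.step()
```

In this sketch the attribute predictor only shapes the generator's gradients; the discriminator update (omitted) would proceed as in a standard GAN.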