Investigating the effect of loss functions on single-image GAN performance
Published in: Journal of Innovative Science and Engineering, 2024-08
Main authors: , ,
Format: Article
Language: English
Online access: Full text
Abstract: Loss functions are crucial in training generative adversarial networks (GANs) and shaping the resulting outputs. These functions, specifically designed for GANs, optimize the generator and discriminator networks together but in opposite directions. GAN models, which typically handle large datasets, have been successful in the field of deep learning. However, exploring the factors that influence the success of GAN models developed for limited-data problems is an important area of research. In this study, we conducted a comprehensive investigation into the loss functions commonly used in the GAN literature, such as binary cross entropy (BCE), Wasserstein generative adversarial network (WGAN), least squares generative adversarial network (LSGAN), and hinge loss. Our research focused on examining the impact of these loss functions on improving output quality and ensuring training convergence in single-image GANs. Specifically, we evaluated the performance of a single-image GAN model, SinGAN, using these loss functions in terms of image quality and diversity. Our experimental results demonstrated that these loss functions successfully produce high-quality, diverse images from a single training image. Additionally, we found that the WGAN-GP and LSGAN-GP loss functions are more effective for single-image GAN models.

(A sketch of the compared loss formulations appears after this record.)
ISSN: 2602-4217
DOI: 10.38088/jise.1497968
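For reference, the sketch below gives commonly used PyTorch formulations of the four adversarial loss families compared in the abstract (BCE, LSGAN, WGAN, hinge), together with the gradient-penalty term implied by the WGAN-GP and LSGAN-GP variants. This is a minimal illustrative sketch of the standard textbook forms, not the authors' or SinGAN's implementation; the names `d_real`, `d_fake`, and `critic`, and the assumption of NCHW image tensors, are hypothetical.

```python
import torch
import torch.nn.functional as F

# d_real, d_fake: raw discriminator outputs (logits/scores) for real and
# generated samples. Each helper returns (discriminator_loss, generator_loss),
# both of which are minimized by their respective optimizers.

def bce_losses(d_real, d_fake):
    # Standard GAN loss: binary cross entropy on the discriminator's logits,
    # with the non-saturating generator objective (push fakes toward "real").
    d_loss = (F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real))
              + F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake)))
    g_loss = F.binary_cross_entropy_with_logits(d_fake, torch.ones_like(d_fake))
    return d_loss, g_loss

def lsgan_losses(d_real, d_fake):
    # LSGAN: least-squares regression of real scores toward 1, fake scores toward 0.
    d_loss = 0.5 * ((d_real - 1).pow(2).mean() + d_fake.pow(2).mean())
    g_loss = 0.5 * (d_fake - 1).pow(2).mean()
    return d_loss, g_loss

def wgan_losses(d_real, d_fake):
    # WGAN: the critic maximizes the score gap between real and fake samples.
    d_loss = d_fake.mean() - d_real.mean()
    g_loss = -d_fake.mean()
    return d_loss, g_loss

def hinge_losses(d_real, d_fake):
    # Hinge loss: margin-based discriminator objective.
    d_loss = F.relu(1.0 - d_real).mean() + F.relu(1.0 + d_fake).mean()
    g_loss = -d_fake.mean()
    return d_loss, g_loss

def gradient_penalty(critic, real, fake):
    # The "-GP" term: penalize deviations of the critic's gradient norm from 1
    # on random interpolations between real and fake images (assumes NCHW tensors).
    alpha = torch.rand(real.size(0), 1, 1, 1, device=real.device)
    interp = (alpha * real + (1 - alpha) * fake).requires_grad_(True)
    scores = critic(interp)
    grads = torch.autograd.grad(outputs=scores, inputs=interp,
                                grad_outputs=torch.ones_like(scores),
                                create_graph=True)[0]
    return ((grads.flatten(1).norm(2, dim=1) - 1) ** 2).mean()
```

In a typical training loop, the discriminator step minimizes `d_loss` (plus a weighted `gradient_penalty` term for the -GP variants) and the generator step minimizes `g_loss`; which of these pairs is plugged in is exactly the factor the study varies.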