On Duality Gap as a Measure for Monitoring GAN Training
Main authors: | , , , |
Format: | Article |
Language: | English |
Online access: | Order full text |
Abstract: | The generative adversarial network (GAN) is among the most popular deep learning models for learning complex data distributions. However, training a GAN is known to be a challenging task. This is often attributed to the lack of correlation between the training progress and the trajectories of the generator and discriminator losses, and to the need for subjective evaluation of the GAN. A recently proposed measure inspired by game theory, the duality gap, aims to bridge this gap. However, as we demonstrate, the duality gap's capability remains constrained by limitations of its estimation process. This paper presents a theoretical understanding of this limitation and proposes a more dependable estimation process for the duality gap. At the crux of our approach is the idea that local perturbations can help agents in a zero-sum game escape non-Nash saddle points efficiently. Through exhaustive experimentation across GAN models and datasets, we establish the efficacy of our approach in capturing the GAN training progress with minimal increase in computational complexity. Further, we show that our estimate, with its ability to identify model convergence/divergence, is a potential performance measure that can be used to tune the hyperparameters of a GAN. |
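The duality gap mentioned in the abstract can be illustrated on a toy zero-sum game. The sketch below is not the paper's estimator: the game f(x, y) = x*y on [-1, 1]^2, the step size, and the function names are illustrative choices. It approximates DG(x0, y0) = max_y f(x0, y) - min_x f(x, y0) by projected gradient ascent on one player and descent on the other; the gap is non-negative and vanishes exactly at the Nash equilibrium (0, 0).

```python
def f(x, y):
    # Toy bilinear zero-sum game on [-1, 1]^2: the min-player controls x,
    # the max-player controls y; the Nash equilibrium is at (0, 0).
    return x * y

def clip(v):
    # Project back onto the feasible interval [-1, 1].
    return max(-1.0, min(1.0, v))

def duality_gap(x0, y0, lr=0.05, steps=500):
    # DG(x0, y0) = max_y f(x0, y) - min_x f(x, y0), estimated with
    # projected gradient ascent on y and descent on x.
    y = y0
    for _ in range(steps):
        y = clip(y + lr * x0)   # df/dy = x, ascent with x fixed at x0
    x = x0
    for _ in range(steps):
        x = clip(x - lr * y0)   # df/dx = y, descent with y fixed at y0
    return f(x0, y) - f(x, y0)

print(duality_gap(0.5, -0.3))  # closed form: |x0| + |y0| = 0.8
print(duality_gap(0.0, 0.0))   # 0.0 at the Nash equilibrium
```

For this bilinear game the inner optimizations have closed-form solutions at the interval boundaries, so the estimate can be checked exactly; for a real GAN, each inner optimization is itself a training run, which is why the estimation process the paper analyzes is the hard part.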
DOI: | 10.48550/arxiv.2012.06723 |
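The abstract's central idea, that local perturbations help players in a zero-sum game escape non-Nash saddle points, can also be seen numerically. In the sketch below (the game f(x, y) = x^2 * y^2, the perturbation size, and the step size are illustrative choices, not the paper's algorithm), the point (1, 0) is stationary for both players yet is not a Nash equilibrium; gradient ascent stalls there until a small perturbation restores an ascent direction.

```python
def f(x, y):
    # Zero-sum game min_x max_y f with f(x, y) = x**2 * y**2.
    return x ** 2 * y ** 2

def grads(x, y):
    # Returns (df/dx, df/dy).
    return 2 * x * y ** 2, 2 * x ** 2 * y

# Every point with y = 0 is stationary, but (1, 0) is not a Nash
# equilibrium: with x fixed at 1, the max-player's payoff y**2 still
# grows as y moves away from 0. Plain gradient ascent stalls there:
assert grads(1.0, 0.0) == (0.0, 0.0)

# A small local perturbation of y (standing in for the random local
# perturbations used to escape such saddles) restores a nonzero
# gradient, and gradient ascent then carries y away from the saddle:
y = 0.01                          # perturbed from the saddle value y = 0
for _ in range(100):
    y += 0.1 * grads(1.0, y)[1]   # gradient ascent on y, x fixed at 1

print(y)  # y has moved far from 0 (it grows by a factor 1.2 per step)
```

The same stalling behavior is what limits a naive duality-gap estimate: if the inner worst-case searches start at such a stationary point, they report a spuriously small gap, which is the failure mode the perturbation-based estimator is designed to avoid.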