Some Theoretical Insights into Wasserstein GANs
Journal of Machine Learning Research, Microtome Publishing, 2021
Published in: Journal of Machine Learning Research, 2021-01
Main authors: , ,
Format: Article
Language: English
Online access: Full text
Abstract: Generative Adversarial Networks (GANs) have been successful in producing outstanding results in areas as diverse as image, video, and text generation. Building on these successes, a large number of empirical studies have validated the benefits of the cousin approach called Wasserstein GANs (WGANs), which brings stability to the training process. In the present paper, we add a new stone to the edifice by contributing some theoretical advances on the properties of WGANs. First, we properly define the architecture of WGANs in the context of integral probability metrics parameterized by neural networks and highlight some of their basic mathematical features. In particular, we stress the interesting optimization properties arising from the use of a parametric 1-Lipschitz discriminator. Then, taking a statistically driven approach, we study the convergence of empirical WGANs as the sample size tends to infinity, and clarify the adversarial effects of the generator and the discriminator by underlining some trade-off properties. These features are finally illustrated with experiments using both synthetic and real-world datasets.
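As context for the abstract's mention of integral probability metrics, here is a minimal sketch of the standard WGAN formulation in common notation (the symbols $\mathscr{D}$, $G_\theta$, $\gamma$, and $\mu$ are ours and may differ from the paper's exact definitions): a class $\mathscr{D}$ of 1-Lipschitz discriminators induces an IPM between the data distribution and the generator's output distribution, and the generator is trained to minimize it.

```latex
% IPM induced by a discriminator class D (here: 1-Lipschitz neural networks):
\[
  d_{\mathscr{D}}(\mu,\nu)
  \;=\; \sup_{f \in \mathscr{D}}
        \Big( \mathbb{E}_{X \sim \mu}\big[f(X)\big]
            - \mathbb{E}_{Y \sim \nu}\big[f(Y)\big] \Big).
\]
% WGAN training: generators G_theta push a noise distribution gamma forward;
% the min-max problem pits the generator against the 1-Lipschitz discriminator:
\[
  \inf_{\theta \in \Theta} \, d_{\mathscr{D}}\big(\mu, (G_\theta)_{\#}\gamma\big)
  \;=\; \inf_{\theta \in \Theta} \, \sup_{f \in \mathscr{D}}
        \Big( \mathbb{E}_{X \sim \mu}\big[f(X)\big]
            - \mathbb{E}_{Z \sim \gamma}\big[f(G_\theta(Z))\big] \Big).
\]
```

When $\mathscr{D}$ is the class of all 1-Lipschitz functions, $d_{\mathscr{D}}$ is the Wasserstein-1 distance by Kantorovich-Rubinstein duality; the restriction to a parametric neural-network class is what distinguishes the setting studied here.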
ISSN: 1532-4435, 1533-7928
DOI: 10.48550/arxiv.2006.02682