Training language GANs from Scratch
Saved in:
Main authors: , , ,
Format: Article
Language: English
Subjects:
Online access: Order full text
Abstract: Generative Adversarial Networks (GANs) enjoy great success at image generation, but have proven difficult to train in the domain of natural language. Challenges with gradient estimation, optimization instability, and mode collapse have led practitioners to resort to maximum likelihood pre-training, followed by small amounts of adversarial fine-tuning. The benefits of GAN fine-tuning for language generation are unclear, as the resulting models produce comparable or worse samples than traditional language models. We show it is in fact possible to train a language GAN from scratch -- without maximum likelihood pre-training. We combine existing techniques such as large batch sizes, dense rewards and discriminator regularization to stabilize and improve language GANs. The resulting model, ScratchGAN, performs comparably to maximum likelihood training on EMNLP2017 News and WikiText-103 corpora according to quality and diversity metrics.
DOI: 10.48550/arxiv.1905.09922
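The abstract names its stabilizers (large batch sizes, dense rewards, discriminator regularization) without showing how they fit together. Below is a minimal, hypothetical PyTorch sketch of one of them: a REINFORCE generator update driven by dense per-token discriminator rewards. The LSTM architectures, module names, and hyperparameters are illustrative assumptions, not the authors' implementation, and the discriminator's own update and its regularization are omitted for brevity.

```python
# Hypothetical sketch (not the authors' code): REINFORCE generator step
# with dense per-token rewards from a prefix-scoring discriminator.
import torch
import torch.nn as nn

VOCAB, EMB, HID, SEQ_LEN, BATCH = 1000, 64, 128, 20, 512  # large batch, per the paper

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, EMB)
        self.rnn = nn.LSTM(EMB, HID, batch_first=True)
        self.out = nn.Linear(HID, VOCAB)

    def sample(self, batch, steps):
        """Autoregressive sampling; keeps per-step log-probs for REINFORCE."""
        inp = torch.zeros(batch, 1, dtype=torch.long)  # assume id 0 is <bos>
        state, toks, logps = None, [], []
        for _ in range(steps):
            h, state = self.rnn(self.embed(inp), state)
            dist = torch.distributions.Categorical(logits=self.out(h[:, -1]))
            tok = dist.sample()
            toks.append(tok)
            logps.append(dist.log_prob(tok))
            inp = tok.unsqueeze(1)
        return torch.stack(toks, 1), torch.stack(logps, 1)

class Discriminator(nn.Module):
    """Scores every prefix, yielding a dense (per-token) reward signal."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, EMB)
        self.rnn = nn.LSTM(EMB, HID, batch_first=True)
        self.score = nn.Linear(HID, 1)

    def forward(self, tokens):
        h, _ = self.rnn(self.embed(tokens))
        return self.score(h).squeeze(-1)  # (batch, seq_len) per-step logits

gen, disc = Generator(), Discriminator()
opt_g = torch.optim.Adam(gen.parameters(), lr=1e-4)

tokens, logps = gen.sample(BATCH, SEQ_LEN)
with torch.no_grad():
    rewards = torch.sigmoid(disc(tokens))  # dense reward for each sampled token
    rewards = rewards - rewards.mean()     # crude baseline to reduce gradient variance

loss = -(logps * rewards).sum(dim=1).mean()  # REINFORCE: reinforce rewarded tokens
opt_g.zero_grad()
loss.backward()
opt_g.step()
```

Scoring every prefix rather than only the finished sentence gives the generator a learning signal at each step, which is the point of dense rewards; the large batch and the mean-reward baseline both serve to tame the variance of the REINFORCE estimator.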