ColdGANs: Taming Language GANs with Cautious Sampling Strategies
Format: Article
Language: English
Summary: Training regimes based on Maximum Likelihood Estimation (MLE) suffer from known limitations, often leading to poorly generated text sequences. At the root of these limitations is the mismatch between training and inference, i.e. the so-called exposure bias, exacerbated by considering only the reference texts as correct, while in practice several alternative formulations could be as good. Generative Adversarial Networks (GANs) can mitigate those limitations, but the discrete nature of text has hindered their application to language generation: the approaches proposed so far, based on Reinforcement Learning, have been shown to underperform MLE. Departing from previous works, we analyze the exploration step in GANs applied to text generation and show how classical sampling results in unstable training. We propose to consider alternative exploration strategies in a GAN framework that we name ColdGANs, where we force the sampling to be close to the distribution modes to get smoother learning dynamics. For the first time, to the best of our knowledge, the proposed language GANs compare favorably to MLE, and obtain improvements over the state-of-the-art on three generative tasks, namely unconditional text generation, question generation, and abstractive summarization.
DOI: 10.48550/arxiv.2006.04643
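
To make the "cautious sampling" idea from the summary more concrete, the sketch below contrasts standard sampling with low-temperature ("cold") sampling from a toy next-token distribution: lowering the softmax temperature sharpens the distribution, so draws concentrate near its modes. This is only an illustrative assumption of how such a strategy can be implemented; the function name, toy logits, and temperature values are invented for the example and do not come from the paper.

```python
import numpy as np

def sample_next_token(logits, temperature=1.0, rng=None):
    """Sample a token index from a categorical distribution over a toy vocabulary.

    With temperature < 1 ("cold" sampling), the softmax is sharpened so that
    draws concentrate near the distribution modes; temperature = 1 recovers
    standard sampling.
    """
    rng = rng or np.random.default_rng()
    scaled = logits / temperature          # sharpen (T < 1) or flatten (T > 1)
    scaled = scaled - scaled.max()         # subtract max for numerical stability
    probs = np.exp(scaled) / np.exp(scaled).sum()
    return rng.choice(len(probs), p=probs)

# Toy next-token distribution: one dominant token, several unlikely ones.
logits = np.array([3.0, 1.0, 0.5, 0.2, 0.1])
rng = np.random.default_rng(0)

standard = [sample_next_token(logits, temperature=1.0, rng=rng) for _ in range(1000)]
cold = [sample_next_token(logits, temperature=0.3, rng=rng) for _ in range(1000)]

print("share of mode token, T=1.0:", np.mean(np.array(standard) == 0))
print("share of mode token, T=0.3:", np.mean(np.array(cold) == 0))
```

Running the script shows the mode token's share of samples rising sharply as the temperature drops, which illustrates the less exploratory, lower-variance behaviour that the abstract associates with smoother GAN training dynamics.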