Language GANs Falling Short
ICLR 2020 - Proceedings of the Seventh International Conference on Learning Representations
Saved in:
| Main authors: | , , , , , |
|---|---|
| Format: | Article |
| Language: | eng |
| Subjects: | |
| Online access: | Order full text |
Abstract:

Generating high-quality text with sufficient diversity is essential for a wide range of Natural Language Generation (NLG) tasks. Maximum-Likelihood (MLE) models trained with teacher forcing have consistently been reported as weak baselines, where the poor performance is attributed to exposure bias (Bengio et al., 2015; Ranzato et al., 2015): at inference time, the model is fed its own prediction instead of a ground-truth token, which can lead to accumulating errors and poor samples. This line of reasoning has led to an outbreak of adversarial approaches to NLG, on the grounds that GANs do not suffer from exposure bias. In this work, we make several surprising observations that contradict common beliefs. First, we revisit the canonical evaluation framework for NLG and point out fundamental flaws with quality-only evaluation: we show that one can game such metrics using a simple, well-known temperature parameter to artificially reduce the entropy of the model's conditional distributions. Second, we leverage the control over the quality/diversity trade-off given by this parameter to evaluate models over the whole quality-diversity spectrum, and we find that MLE models consistently outperform the proposed GAN variants across the entire space. Our results have two main implications: 1) the impact of exposure bias on sample quality is less severe than previously thought, and 2) temperature tuning provides a better quality/diversity trade-off than adversarial training while being easier to train, easier to cross-validate, and less computationally expensive. Code to reproduce the experiments is available at github.com/pclucas14/GansFallingShort.
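The exposure-bias mechanism the abstract summarizes (condition on ground-truth prefixes during training, on self-generated prefixes at inference) can be stated in a few lines. The following is a minimal sketch under assumed names (a toy `BIGRAM_LOGITS` table standing in for a trained autoregressive model, and a `decode` helper); none of it is from the authors' repository:

```python
import numpy as np

# Toy "language model": fixed logits for the next token given the current
# one, over a 4-token vocabulary. Purely illustrative.
BIGRAM_LOGITS = np.array([
    [0.1, 2.0, 0.3, 0.2],
    [0.5, 0.1, 1.8, 0.4],
    [1.5, 0.2, 0.1, 1.0],
    [0.3, 0.9, 0.6, 0.1],
])

def decode(start_token, length, targets=None):
    """Greedy roll-out of the toy model.

    With `targets`, each step conditions on the ground-truth prefix
    (teacher forcing, as in MLE training). Without it, the model is fed
    its own previous prediction, so one early mistake changes every
    later conditional -- the exposure bias the abstract describes.
    """
    token, output = start_token, []
    for t in range(length):
        pred = int(np.argmax(BIGRAM_LOGITS[token]))
        output.append(pred)
        # Teacher forcing feeds the ground truth; free running feeds the prediction.
        token = targets[t] if targets is not None else pred
    return output

free_run = decode(start_token=0, length=3)                    # inference-time behaviour
forced = decode(start_token=0, length=3, targets=[2, 3, 1])   # training-time behaviour
```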
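The temperature parameter at the heart of the paper's argument simply rescales the logits before the softmax, which lowers (or raises) the entropy of each conditional distribution and thereby trades diversity for quality. Below is a hedged NumPy sketch with a hypothetical function name, not the released code:

```python
import numpy as np

def sample_with_temperature(logits, temperature=1.0, rng=None):
    """Sample a token id from softmax(logits / temperature).

    temperature < 1 sharpens the distribution (lower entropy: higher
    quality, less diversity); temperature > 1 flattens it. Hypothetical
    helper for illustration only.
    """
    rng = rng if rng is not None else np.random.default_rng(0)
    scaled = np.asarray(logits, dtype=float) / temperature
    scaled -= scaled.max()          # subtract max for numerical stability
    probs = np.exp(scaled)
    probs /= probs.sum()
    return int(rng.choice(len(probs), p=probs))

# Usage on a toy vocabulary: low temperature is near-greedy, high is diverse.
logits = [2.0, 1.0, 0.5, -1.0]
low_t = sample_with_temperature(logits, temperature=0.5)
high_t = sample_with_temperature(logits, temperature=1.5)
```

Sweeping this single scalar traces out the quality/diversity curve along which the paper compares MLE and GAN models.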
DOI: 10.48550/arxiv.1811.02549