Promises and Pitfalls of Generative Masked Language Modeling: Theoretical Framework and Practical Guidelines
Saved in:
Main authors: | , , , , , , , |
---|---|
Format: | Article |
Language: | eng |
Subjects: | |
Online access: | Order full text |
Abstract: | Autoregressive language models are the currently dominant paradigm for text
generation, but they have some fundamental limitations that cannot be remedied
by scale, for example, inherently sequential and unidirectional generation. While
alternate classes of models have been explored, we have limited mathematical
understanding of their fundamental power and limitations. In this paper we
focus on Generative Masked Language Models (GMLMs), a non-autoregressive
paradigm in which we train a model to fit conditional probabilities of the data
distribution via masking, which are subsequently used as inputs to a Markov
chain to draw samples from the model. These models empirically strike a
promising speed-quality trade-off, as each step can typically be parallelized by
decoding the entire sequence in parallel. We develop a mathematical framework
for analyzing and improving such models which sheds light on questions of
sample complexity as well as inference speed and quality. Empirically, we adapt
the T5 model for iteratively refined parallel decoding, achieving a 2-3x speedup
in machine translation with minimal sacrifice in quality compared with
autoregressive models. We run careful ablation experiments to give
recommendations on key design choices, and make fine-grained observations on
the common error modes in connection with our theory. Our mathematical analyses
and empirical observations characterize both the potential and the limitations of
this approach, and can inform future work on improving the understanding and
performance of GMLMs. Our code is released at
https://github.com/google-research/google-research/tree/master/padir |
---|---|
DOI: | 10.48550/arxiv.2407.21046 |
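
The abstract describes iterative, parallel refinement: the model predicts all masked positions at once, then re-masks low-confidence positions and repeats. Below is a minimal, hypothetical sketch of such a mask-predict loop. The function `fill_masked`, the toy vocabulary, the confidence scores, and the masking schedule are illustrative assumptions only; the paper's actual T5-based decoder (see the padir repository linked above) is more involved.

```python
# Minimal sketch of iterative mask-predict decoding for a GMLM, assuming a
# hypothetical model interface. `fill_masked` is a toy stand-in for a trained
# masked language model that returns a (token, confidence) pair per position.
import random

MASK = "<mask>"
VOCAB = ["the", "cat", "sat", "on", "mat", "."]  # illustrative toy vocabulary


def fill_masked(tokens):
    """Toy model call: propose a token and a confidence for every position."""
    return [
        (tok if tok != MASK else random.choice(VOCAB), random.random())
        for tok in tokens
    ]


def mask_predict(length, num_steps=4):
    """Start fully masked; predict all positions in parallel, then re-mask the
    least confident ones and refine over a fixed number of steps."""
    tokens = [MASK] * length
    for step in range(num_steps):
        preds = fill_masked(tokens)          # one parallel "decode everything" pass
        tokens = [tok for tok, _ in preds]
        num_to_mask = int(length * (1 - (step + 1) / num_steps))
        if num_to_mask == 0:
            break
        # Re-mask the lowest-confidence positions for the next refinement step.
        worst = sorted(range(length), key=lambda i: preds[i][1])[:num_to_mask]
        for i in worst:
            tokens[i] = MASK
    return tokens


print(mask_predict(length=6))
```

Each iteration decodes the whole sequence in one parallel pass, which is where the speed advantage over token-by-token autoregressive decoding comes from; quality is recovered by spending a few refinement steps instead of one pass per token.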