Scaling Laws of Decoder-Only Models on the Multilingual Machine Translation Task
Format: Article
Language: English
Abstract: Recent studies have showcased remarkable capabilities of decoder-only models in many NLP tasks, including translation. Yet, the machine translation field has been largely dominated by encoder-decoder models based on the Transformer architecture. As a consequence, scaling laws of encoder-decoder models for neural machine translation have already been well studied, but decoder-only models have received less attention. This work explores the scaling laws of decoder-only models on the multilingual and multidomain translation task. We trained a collection of six decoder-only models, ranging from 70M to 7B parameters, on a sentence-level, multilingual and multidomain dataset. We conducted a series of experiments showing that the loss of decoder-only models can be estimated using a scaling law similar to the one discovered for large language models, but we also show that this scaling law struggles to generalize to much larger models or to a different data distribution. We also study different scaling methods and show that scaling the depth and the width of a model leads to similar test-loss improvements, but with a different impact on the model's efficiency.
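
For illustration, below is a minimal sketch of how a parameter-count scaling law of the kind mentioned in the abstract could be fitted. The power-law form L(N) = E + A / N^alpha, the model sizes, and the loss values are assumptions chosen for demonstration only, not results from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical data: model sizes mirroring the 70M-7B range from the abstract,
# with fictitious test losses invented purely for this illustration.
params = np.array([70e6, 160e6, 410e6, 1.4e9, 2.8e9, 7e9])  # parameter counts (assumed)
losses = np.array([3.10, 2.85, 2.62, 2.41, 2.33, 2.24])     # fictitious test losses

def scaling_law(n, E, A, alpha):
    """Irreducible loss E plus a power-law term that shrinks with model size n."""
    return E + A / n**alpha

# Fit the three coefficients; p0 gives a rough starting point for the optimizer.
(E, A, alpha), _ = curve_fit(scaling_law, params, losses, p0=(2.0, 100.0, 0.3), maxfev=10000)
print(f"fitted: E={E:.3f}, A={A:.3g}, alpha={alpha:.3f}")

# Extrapolate to a larger model; the abstract notes that such extrapolation
# can break down for much larger models or a different data distribution.
print(f"predicted loss at 13B params: {scaling_law(13e9, E, A, alpha):.3f}")
```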
DOI: 10.48550/arxiv.2409.15051