LL\"aMmlein: Compact and Competitive German-Only Language Models from Scratch
Main authors: | , , |
---|---|
Format: | Article |
Language: | eng |
Subjects: | |
Online access: | Order full text |
Summary: | We create two German-only decoder models, LLäMmlein 120M and 1B, transparently from scratch and publish them, along with the training data, for the German NLP research community to use. The model training involved several key steps: extensive data preprocessing, the creation of a custom German tokenizer, the training itself, and the evaluation of the final models on various benchmarks. Throughout training, multiple checkpoints were saved and analyzed with the SuperGLEBer benchmark to monitor the models' learning dynamics. Compared to state-of-the-art models on the SuperGLEBer benchmark, both LLäMmlein models performed competitively, consistently matching or surpassing models of similar parameter size. The results show that model quality scales with size as expected, but performance on some tasks plateaued early, offering valuable insights into resource allocation for future model development. |
DOI: | 10.48550/arxiv.2411.11171 |
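To illustrate one step the summary names, the creation of a custom German tokenizer, here is a minimal sketch using the Hugging Face `tokenizers` library; the corpus path, vocabulary size, and special tokens are assumptions for illustration, not the paper's actual configuration:

```python
from tokenizers import Tokenizer
from tokenizers.models import BPE
from tokenizers.pre_tokenizers import ByteLevel
from tokenizers.trainers import BpeTrainer

# Byte-level BPE tokenizer, trained from scratch on German text.
tokenizer = Tokenizer(BPE(unk_token="[UNK]"))
tokenizer.pre_tokenizer = ByteLevel()

# vocab_size and special tokens are illustrative assumptions.
trainer = BpeTrainer(
    vocab_size=32_000,
    special_tokens=["[UNK]", "<s>", "</s>"],
)

# "german_corpus.txt" is a hypothetical plain-text training file.
tokenizer.train(files=["german_corpus.txt"], trainer=trainer)
tokenizer.save("german_tokenizer.json")
```

Since the summary states that the models are published for community use, a sketch of loading one with `transformers` follows; the repository identifier is an assumption and should be verified against the paper:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical Hugging Face repository name; verify against the paper.
model_id = "LSX-UniWue/LLaMmlein_1B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Generate a short German continuation from a prompt.
inputs = tokenizer("Die deutsche Sprache", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```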