Dolma: an Open Corpus of Three Trillion Tokens for Language Model Pretraining Research
Main authors:
Format: Article
Language: English
Subjects:
Online access: Order full text
Abstract: Information about pretraining corpora used to train the current best-performing language models is seldom discussed: commercial models rarely detail their data, and even open models are often released without accompanying training data or recipes to reproduce them. As a result, it is challenging to conduct and advance scientific research on language modeling, such as understanding how training data impacts model capabilities and limitations. To facilitate scientific research on language model pretraining, we curate and release Dolma, a three-trillion-token English corpus, built from a diverse mixture of web content, scientific papers, code, public-domain books, social media, and encyclopedic materials. We extensively document Dolma, including its design principles, details about its construction, and a summary of its contents. We present analyses and experimental results on intermediate states of Dolma to share what we have learned about important data curation practices. Finally, we open-source our data curation toolkit to enable reproduction of our work as well as support further research in large-scale data curation.
DOI: 10.48550/arxiv.2402.00159