Data Processing for the OpenGPT-X Model Family
Format: Article
Language: English
Online access: Order full text
Abstract: This paper presents a comprehensive overview of the data preparation pipeline developed for the OpenGPT-X project, a large-scale initiative aimed at creating open and high-performance multilingual large language models (LLMs). The project goal is to deliver models that cover all major European languages, with a particular focus on real-world applications within the European Union. We explain all data processing steps, from data selection and requirement definition to the preparation of the final datasets for model training. We distinguish between curated data and web data, as each of these categories is handled by a distinct pipeline: curated data undergoes minimal filtering, while web data requires extensive filtering and deduplication. This distinction guided the development of specialized algorithmic solutions for both pipelines. In addition to describing the processing methodologies, we provide an in-depth analysis of the datasets, increasing transparency and alignment with European data regulations. Finally, we share key insights and challenges faced during the project, offering recommendations for future endeavors in large-scale multilingual data preparation for LLMs.
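
The two-track design mentioned in the abstract (curated data with minimal filtering versus web data with heavy filtering and deduplication) can be illustrated with a toy sketch. The heuristics below, a word-count threshold, an alphabetic-character ratio, and SHA-256 exact-duplicate removal, are hypothetical stand-ins for illustration only, not the filters or deduplication method actually used in the OpenGPT-X pipeline.

```python
import hashlib

def passes_quality_filter(text: str, min_words: int = 50) -> bool:
    """Toy quality heuristic: require a minimum length and a
    reasonable share of alphabetic characters. Thresholds are
    illustrative assumptions, not the paper's settings."""
    words = text.split()
    if len(words) < min_words:
        return False
    alpha_chars = sum(ch.isalpha() for ch in text)
    return alpha_chars / max(len(text), 1) > 0.6

def deduplicate(docs: list[str]) -> list[str]:
    """Drop exact duplicates by hashing document contents."""
    seen, unique = set(), []
    for doc in docs:
        digest = hashlib.sha256(doc.encode("utf-8")).hexdigest()
        if digest not in seen:
            seen.add(digest)
            unique.append(doc)
    return unique

def prepare(docs: list[str], curated: bool) -> list[str]:
    """Curated data passes through with minimal processing;
    web data is filtered and then deduplicated."""
    if curated:
        return docs
    return deduplicate([d for d in docs if passes_quality_filter(d)])
```

In a realistic pipeline the exact-hash step would typically be complemented by fuzzy deduplication (e.g., MinHash) and language-specific filters, which is why the paper treats web data with specialized algorithmic solutions rather than a single rule.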
DOI: 10.48550/arxiv.2410.08800