What's In My Big Data?
Format: Article
Language: English
Abstract: Large text corpora are the backbone of language models. However, we have a limited understanding of the content of these corpora, including general statistics, quality, social factors, and inclusion of evaluation data (contamination). In this work, we propose What's In My Big Data? (WIMBD), a platform and a set of sixteen analyses that allow us to reveal and compare the contents of large text corpora. WIMBD builds on two basic capabilities -- count and search -- at scale, which allows us to analyze more than 35 terabytes on a standard compute node. We apply WIMBD to ten different corpora used to train popular language models, including C4, The Pile, and RedPajama. Our analysis uncovers several surprising and previously undocumented findings about these corpora, including the high prevalence of duplicate, synthetic, and low-quality content, personally identifiable information, toxic language, and benchmark contamination. For instance, we find that about 50% of the documents in RedPajama and LAION-2B-en are duplicates. In addition, several datasets used for benchmarking models trained on such corpora are contaminated with respect to important benchmarks, including the Winograd Schema Challenge and parts of GLUE and SuperGLUE. We open-source WIMBD's code and artifacts to provide a standard set of evaluations for new text-based corpora and to encourage more analyses and transparency around them.
DOI: 10.48550/arxiv.2310.20707
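To make the abstract's two primitives concrete, here is a minimal Python sketch of a WIMBD-style analysis. It is a hypothetical illustration, not the paper's implementation: the JSONL layout (one document per line with a "text" field), the file name corpus.jsonl, and the sample benchmark sentence are all assumptions. It counts exact-duplicate documents by hashing their text, and searches for verbatim benchmark strings as a crude contamination check.

```python
# Sketch of WIMBD-style "count" and "search" analyses over a corpus.
# Hypothetical illustration, not the WIMBD codebase: file names and the
# JSONL format (one JSON object per line with a "text" field) are assumed.
import hashlib
import json
from collections import Counter


def iter_documents(path):
    """Yield document texts from a JSONL corpus file (assumed layout)."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            yield json.loads(line)["text"]


def duplicate_rate(path):
    """Count: fraction of documents whose exact text occurs more than once."""
    counts = Counter(
        hashlib.sha256(doc.encode("utf-8")).hexdigest()
        for doc in iter_documents(path)
    )
    total = sum(counts.values())
    duplicated = sum(c for c in counts.values() if c > 1)
    return duplicated / total if total else 0.0


def contaminated_examples(corpus_path, benchmark_texts):
    """Search: benchmark instances that appear verbatim in the corpus."""
    targets = set(benchmark_texts)
    hits = set()
    for doc in iter_documents(corpus_path):
        for target in targets:
            if target in doc:
                hits.add(target)
    return hits


if __name__ == "__main__":
    print(f"duplicate rate: {duplicate_rate('corpus.jsonl'):.1%}")
    # Illustrative sample in the style of a Winograd Schema instance.
    benchmark = [
        "The trophy doesn't fit in the brown suitcase because it is too big."
    ]
    hits = contaminated_examples("corpus.jsonl", benchmark)
    print(f"contaminated benchmark instances: {len(hits)}")
```

At the 35-terabyte scale the abstract mentions, these single-file loops would be replaced by distributed counting and an indexed search backend; the sketch only conveys the two primitives the analyses build on.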