On Inter-dataset Code Duplication and Data Leakage in Large Language Models
Format: | Article |
Language: | English |
Online access: | Order full text |
Abstract: | Motivation. Large language models (LLMs) have exhibited remarkable
proficiency in diverse software engineering (SE) tasks. Handling such tasks
typically involves acquiring foundational coding knowledge on large,
general-purpose datasets during a pre-training phase, and subsequently refining
on smaller, task-specific datasets as part of a fine-tuning phase.
Problem statement. While intra-dataset code duplication examines the
intersection between the training and test splits within a given dataset and
has been addressed in prior research, inter-dataset code duplication, which
gauges the overlap between different datasets, remains largely unexplored. If
this phenomenon exists, it could compromise the integrity of LLM evaluations
because of the inclusion of fine-tuning test samples that were already
encountered during pre-training, resulting in inflated performance metrics.
Contribution. This paper explores the phenomenon of inter-dataset code
duplication and its impact on evaluating LLMs across diverse SE tasks.
Study design. We conduct an empirical study using the CodeSearchNet dataset
(CSN), a widely adopted pre-training dataset, and five fine-tuning datasets
used for various SE tasks. We first identify the intersection between the
pre-training and fine-tuning datasets using a deduplication process. Next, we
pre-train two versions of LLMs using a subset of CSN: one leaky LLM and one
non-leaky LLM. Finally, we fine-tune both models and compare their performance
on leaky fine-tuning test samples.
Results. Our findings reveal a potential threat to the evaluation of LLMs
across multiple SE tasks, stemming from the inter-dataset code duplication
phenomenon. We also demonstrate that this threat is accentuated by the chosen
fine-tuning technique. Furthermore, we provide evidence that open-source models
could be affected by inter-dataset duplication. |
DOI: | 10.48550/arxiv.2401.07930 |
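
The study design summarized above hinges on detecting overlap between the CSN pre-training corpus and the fine-tuning test splits. Below is a minimal sketch of such an inter-dataset deduplication step, assuming a token-level Jaccard similarity criterion with a tunable threshold; the function names and the threshold value are illustrative, and the paper's actual deduplication procedure may differ.

```python
import re

def tokenize(code: str) -> set[str]:
    """Split a code snippet into a set of identifier and symbol tokens."""
    return set(re.findall(r"[A-Za-z_]\w*|\S", code))

def jaccard(a: set[str], b: set[str]) -> float:
    """Jaccard similarity between two token sets."""
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

def find_leaky_test_samples(pretrain_corpus, finetune_test, threshold=0.8):
    """Return indices of fine-tuning test samples whose token sets
    near-duplicate any pre-training sample (potential inter-dataset leakage)."""
    pretrain_token_sets = [tokenize(snippet) for snippet in pretrain_corpus]
    leaky_indices = []
    for i, snippet in enumerate(finetune_test):
        tokens = tokenize(snippet)
        if any(jaccard(tokens, p) >= threshold for p in pretrain_token_sets):
            leaky_indices.append(i)
    return leaky_indices

if __name__ == "__main__":
    # Toy example: a renamed copy of the same function is flagged as a duplicate,
    # while an unrelated snippet is not.
    pretrain = ["def add(a, b):\n    return a + b"]
    test = ["def add(x, y):\n    return x + y",
            "class Stack:\n    def __init__(self):\n        self.items = []"]
    print(find_leaky_test_samples(pretrain, test, threshold=0.6))  # -> [0]
```

Samples flagged in this way would form the "leaky" subset of each fine-tuning test set on which, per the abstract, the leaky and non-leaky pre-trained models are compared.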