On Inter-dataset Code Duplication and Data Leakage in Large Language Models

Motivation. Large language models (LLMs) have exhibited remarkable proficiency in diverse software engineering (SE) tasks. Handling such tasks typically involves acquiring foundational coding knowledge on large, general-purpose datasets during a pre-training phase, and subsequently refining on smaller, task-specific datasets as part of a fine-tuning phase.

Problem statement. While intra-dataset code duplication examines the intersection between the training and test splits within a given dataset and has been addressed in prior research, inter-dataset code duplication, which gauges the overlap between different datasets, remains largely unexplored. If this phenomenon exists, it could compromise the integrity of LLM evaluations because of the inclusion of fine-tuning test samples that were already encountered during pre-training, resulting in inflated performance metrics.

Contribution. This paper explores the phenomenon of inter-dataset code duplication and its impact on evaluating LLMs across diverse SE tasks.

Study design. We conduct an empirical study using the CodeSearchNet dataset (CSN), a widely adopted pre-training dataset, and five fine-tuning datasets used for various SE tasks. We first identify the intersection between the pre-training and fine-tuning datasets using a deduplication process. Next, we pre-train two versions of LLMs using a subset of CSN: one leaky LLM and one non-leaky LLM. Finally, we fine-tune both models and compare their performances using leaky fine-tuning test samples.

Results. Our findings reveal a potential threat to the evaluation of LLMs across multiple SE tasks, stemming from the inter-dataset code duplication phenomenon. We also demonstrate that this threat is accentuated by the chosen fine-tuning technique. Furthermore, we provide evidence that open-source models could be affected by inter-dataset duplication.
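The deduplication step in the study design is described only at a high level in the abstract. As a rough, hypothetical illustration of how such an inter-dataset overlap check can work, the Python sketch below flags a fine-tuning test sample as leaky when its token set nearly matches some pre-training sample (token-bag Jaccard similarity, a common approximation in code-duplication work). The function names, tokenizer, and threshold here are illustrative assumptions, not the paper's implementation.

    # Hypothetical sketch: flag fine-tuning test samples that near-duplicate a
    # pre-training sample via token-bag Jaccard similarity. The paper's actual
    # deduplication pipeline (tokenizer, measure, threshold) may differ.
    import re

    def token_set(code):
        """Approximate a code snippet by its set of identifier/number tokens."""
        return frozenset(re.findall(r"[A-Za-z_][A-Za-z0-9_]*|\d+", code))

    def jaccard(a, b):
        """Jaccard similarity of two token sets (1.0 for two empty sets)."""
        return len(a & b) / len(a | b) if (a or b) else 1.0

    def leaky_indices(pretrain_corpus, finetune_test, threshold=0.8):
        """Return indices of test samples near-duplicated in pre-training."""
        pre = [token_set(c) for c in pretrain_corpus]
        return {
            i for i, snippet in enumerate(finetune_test)
            if any(jaccard(token_set(snippet), p) >= threshold for p in pre)
        }

    if __name__ == "__main__":
        pretrain = ["def add(a, b):\n    return a + b"]
        test = [
            "def add(a, b): return a + b",  # same tokens, different formatting
            "def mul(x, y):\n    return x * y",
        ]
        print(leaky_indices(pretrain, test))  # -> {0}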

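Given those leaky indices, the comparison between the leaky and non-leaky models reduces to scoring both on that test subset. Below is a minimal sketch, assuming a placeholder exact-match metric and a generic predict-function interface; the paper's tasks each use their own task-specific metrics and models, so every name here is a hypothetical stand-in.

    # Hypothetical sketch of the comparison step: score the leaky and non-leaky
    # fine-tuned models on the leaky test subset. `predict` stands in for any
    # model's inference function; exact match is a placeholder metric.
    def subset_accuracy(predict, inputs, references, indices):
        """Mean exact-match accuracy restricted to the given sample indices."""
        hits = [predict(inputs[i]) == references[i] for i in sorted(indices)]
        return sum(hits) / len(hits) if hits else 0.0

    # Usage with two hypothetical models: a markedly higher score for the leaky
    # model on this subset would indicate inflated evaluation results caused by
    # inter-dataset duplication.
    # leaky_acc = subset_accuracy(leaky_model_predict, X_test, y_test, leaky_idx)
    # clean_acc = subset_accuracy(nonleaky_model_predict, X_test, y_test, leaky_idx)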
Bibliographic Details
Main authors: López, José Antonio Hernández; Chen, Boqi; Saaz, Mootez; Sharma, Tushar; Varró, Dániel
Format: Article
Language: English
Published: 15 January 2024 (arXiv preprint)
Subjects: Computer Science - Software Engineering
DOI: 10.48550/arxiv.2401.07930
License: CC BY 4.0
Online access: https://arxiv.org/abs/2401.07930