CopyLens: Dynamically Flagging Copyrighted Sub-Dataset Contributions to LLM Outputs


Detailed Description

Saved in:
Bibliographic Details
Main Authors: Ma, Qichao, Zhu, Rui-Jie, Liu, Peiye, Yan, Renye, Zhang, Fahong, Liang, Ling, Li, Meng, Yu, Zhaofei, Wang, Zongwei, Cai, Yimao, Huang, Tiejun
Format: Article
Language: eng
Online Access: Order full text
Description: Large Language Models (LLMs) have become pervasive due to their knowledge absorption and text-generation capabilities. Concurrently, the copyright issue for pretraining datasets has been a pressing concern, particularly when generation includes specific styles. Previous methods either focus on the defense of identical copyrighted outputs or find interpretability by individual tokens with computational burdens. However, the gap between them exists, where direct assessments of how dataset contributions impact LLM outputs are missing. Once the model providers ensure copyright protection for data holders, a more mature LLM community can be established. To address these limitations, we introduce CopyLens, a new framework to analyze how copyrighted datasets may influence LLM responses. Specifically, a two-stage approach is employed: First, based on the uniqueness of pretraining data in the embedding space, token representations are initially fused for potential copyrighted texts, followed by a lightweight LSTM-based network to analyze dataset contributions. With such a prior, a contrastive-learning-based non-copyright OOD detector is designed. Our framework can dynamically face different situations and bridge the gap between current copyright detection methods. Experiments show that CopyLens improves efficiency and accuracy by 15.2% over our proposed baseline, 58.7% over prompt engineering methods, and 0.21 AUC over OOD detection baselines.
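The description sketches a two-stage pipeline: token representations are first fused for a candidate text, a classifier then attributes the text to a pretraining sub-dataset, and an OOD detector flags non-copyrighted inputs. The paper itself uses an LSTM-based network and a contrastive-learning OOD detector; the toy sketch below only illustrates the same two-stage idea with mean-pooled fusion and nearest-centroid attribution plus a distance threshold as the OOD gate. All names, vectors, and thresholds here are invented for illustration, not taken from the paper.

```python
# Illustrative sketch only -- NOT the authors' implementation.
# Stage 1: fuse per-token embeddings into one sequence vector.
# Stage 2: attribute the fused vector to the nearest sub-dataset
# centroid, unless it is far from all centroids, in which case it is
# flagged as OOD (i.e. likely non-copyrighted / unseen data).
from math import dist

def fuse(token_embeddings):
    """Mean-pool per-token vectors into a single fused representation."""
    n = len(token_embeddings)
    return [sum(v[i] for v in token_embeddings) / n
            for i in range(len(token_embeddings[0]))]

def attribute(fused, centroids, ood_threshold=1.0):
    """Return the closest sub-dataset name, or 'OOD' if none is close."""
    best = min(centroids, key=lambda name: dist(fused, centroids[name]))
    if dist(fused, centroids[best]) > ood_threshold:
        return "OOD"
    return best

# Toy centroids for two hypothetical copyrighted sub-datasets.
centroids = {"books_corpus": [1.0, 0.0], "news_corpus": [0.0, 1.0]}

tokens = [[0.9, 0.1], [1.1, -0.1]]               # resembles books_corpus
print(attribute(fuse(tokens), centroids))         # -> books_corpus
print(attribute(fuse([[5.0, 5.0]]), centroids))   # -> OOD
```

In the paper, the fusion step and the classifier are learned (LSTM) and the OOD boundary comes from contrastive training rather than a fixed distance threshold; this sketch just makes the data flow of "fuse, attribute, or flag as out-of-distribution" concrete.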
DOI: 10.48550/arxiv.2410.04454
Date: 2024-10-06
Source: arXiv.org
Subjects: Computer Science - Computation and Language
URL: https://arxiv.org/abs/2410.04454