CopyLens: Dynamically Flagging Copyrighted Sub-Dataset Contributions to LLM Outputs
Saved in:
Main authors:
Format: Article
Language: English
Subjects:
Online access: Order full text
Abstract: Large Language Models (LLMs) have become pervasive due to their knowledge
absorption and text-generation capabilities. At the same time, copyright in
pretraining datasets has become a pressing concern, particularly when generated
text reproduces specific styles. Previous methods either defend against verbatim
reproduction of copyrighted outputs or pursue token-level interpretability at
considerable computational cost. A gap remains between them: direct assessments
of how dataset contributions impact LLM outputs are missing. Once model providers
can ensure copyright protection for data holders, a more mature LLM community can
be established. To address these limitations, we introduce CopyLens, a new
framework to analyze how copyrighted datasets may influence LLM responses.
Specifically, a two-stage approach is employed: first, exploiting the uniqueness
of pretraining data in the embedding space, token representations of potentially
copyrighted texts are fused and passed to a lightweight LSTM-based network that
analyzes dataset contributions; with this prior, a contrastive-learning-based
non-copyright out-of-distribution (OOD) detector is designed. Our framework can
dynamically adapt to different situations and bridge the gap between current
copyright detection methods. Experiments show that CopyLens improves efficiency
and accuracy by 15.2% over our proposed baseline, by 58.7% over prompt-engineering
methods, and by 0.21 AUC over OOD detection baselines.
DOI: 10.48550/arxiv.2410.04454
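
The abstract's first stage pairs fused token representations with a lightweight LSTM-based network that attributes a generated text to copyrighted sub-datasets. Below is a minimal sketch of what such a classifier could look like, assuming PyTorch; the dimensions, the `DatasetContributionLSTM` name, and the softmax contribution head are illustrative assumptions and not the authors' released implementation.

```python
# Hypothetical sketch: an LSTM over fused LLM token representations that outputs
# per-sub-dataset contribution scores. All names and sizes are assumptions.
import torch
import torch.nn as nn

class DatasetContributionLSTM(nn.Module):
    def __init__(self, embed_dim=768, hidden_dim=128, num_subdatasets=8):
        super().__init__()
        # Lightweight recurrent encoder over the fused token representations.
        self.lstm = nn.LSTM(embed_dim, hidden_dim,
                            batch_first=True, bidirectional=True)
        # Head scoring the contribution of each copyrighted sub-dataset.
        self.head = nn.Linear(2 * hidden_dim, num_subdatasets)

    def forward(self, token_reprs):
        # token_reprs: (batch, seq_len, embed_dim) embeddings from the target LLM.
        _, (h_n, _) = self.lstm(token_reprs)
        # Concatenate final forward/backward hidden states as a sequence summary.
        summary = torch.cat([h_n[-2], h_n[-1]], dim=-1)
        # Softmax over sub-datasets gives a normalized contribution estimate.
        return torch.softmax(self.head(summary), dim=-1)

# Usage example: score two generated passages of 64 tokens each.
model = DatasetContributionLSTM()
fused = torch.randn(2, 64, 768)   # stand-in for fused LLM token embeddings
contributions = model(fused)      # shape (2, 8), one score per sub-dataset
print(contributions.sum(dim=-1))  # each row sums to 1
```

The second stage described in the abstract, a contrastive-learning-based non-copyright OOD detector, would sit on top of such contribution features; its training objective is not specified in this record, so it is not sketched here.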