Min-K%++: Improved Baseline for Detecting Pre-Training Data from Large Language Models
Main Authors: |  |
---|---|
Format: | Article |
Language: | English |
Subjects: |  |
Online Access: | Order full text |
Abstract: | The problem of pre-training data detection for large language models (LLMs) has received growing attention due to its implications in critical issues like copyright violation and test data contamination. Despite improved performance, existing methods (including the state-of-the-art, Min-K%) are mostly developed upon simple heuristics and lack solid, reasonable foundations. In this work, we propose a novel and theoretically motivated methodology for pre-training data detection, named Min-K%++. Specifically, our key insight is that, through maximum likelihood training, training samples tend to be local maxima of the modeled distribution along each input dimension, which in turn allows us to translate the problem into identifying local maxima. We then design our method to work under the discrete distributions modeled by LLMs: its core idea is to determine whether the input forms a mode of, or otherwise has relatively high probability under, the conditional categorical distribution. Empirically, the proposed method achieves new state-of-the-art performance across multiple settings. On the WikiMIA benchmark, Min-K%++ outperforms the runner-up by 6.2% to 10.5% in detection AUROC averaged over five models. On the more challenging MIMIR benchmark, it consistently improves upon reference-free methods while performing on par with the reference-based method that requires an extra reference model. |
DOI: | 10.48550/arxiv.2404.02936 |
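
The abstract describes scoring each token of a candidate text by how close it is to a mode of the model's next-token (conditional categorical) distribution, then aggregating the lowest k% of per-token scores. Below is a minimal, hypothetical NumPy sketch of such a score, assuming per-position next-token log-probabilities are already available; the function name, the standardization by the distribution's mean and standard deviation, and the k% aggregation follow the abstract's description and are not taken verbatim from the paper.

```python
import numpy as np

def min_k_plus_plus_score(log_probs, token_ids, k=0.2):
    """Hypothetical Min-K%++-style membership score (sketch, not the paper's code).

    log_probs : array of shape (T, V), log p(. | x_<t) for each position t,
                i.e. the model's next-token log-probabilities over the vocab.
    token_ids : length-T integer array of the observed next tokens x_t.
    k         : fraction of lowest-scoring tokens to average over.
    """
    T, V = log_probs.shape
    probs = np.exp(log_probs)                              # categorical p(. | x_<t)
    mu = (probs * log_probs).sum(axis=1)                   # E_z[log p(z | x_<t)]
    var = (probs * (log_probs - mu[:, None]) ** 2).sum(axis=1)
    sigma = np.sqrt(var) + 1e-8                            # guard against zero variance
    token_logp = log_probs[np.arange(T), token_ids]        # log p(x_t | x_<t)
    token_scores = (token_logp - mu) / sigma               # how mode-like each token is
    n = max(1, int(k * T))
    # Average the k% lowest per-token scores; a higher value suggests the
    # input is more likely to have appeared in the pre-training data.
    return np.sort(token_scores)[:n].mean()
```

In practice, `log_probs` would presumably be obtained from the LLM's output logits for the candidate text, and detection proceeds by thresholding the returned score (higher suggesting membership), which is the setup under which AUROC numbers like those quoted in the abstract are computed.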