ConStat: Performance-Based Contamination Detection in Large Language Models
Main authors:
Format: Article
Language: English
Subjects:
Online access: Order full text
Summary: Public benchmarks play an essential role in the evaluation of large language models. However, data contamination can lead to inflated performance, rendering them unreliable for model comparison. It is therefore crucial to detect contamination and estimate its impact on measured performance. Unfortunately, existing detection methods can be easily evaded and fail to quantify contamination. To overcome these limitations, we propose a novel definition of contamination as artificially inflated and non-generalizing benchmark performance instead of the inclusion of benchmark samples in the training data. This perspective enables us to detect any model with inflated performance, i.e., performance that does not generalize to rephrased samples, synthetic samples from the same distribution, or different benchmarks for the same task. Based on this insight, we develop ConStat, a statistical method that reliably detects and quantifies contamination by comparing performance between a primary and reference benchmark relative to a set of reference models. We demonstrate the effectiveness of ConStat in an extensive evaluation of diverse model architectures, benchmarks, and contamination scenarios and find high levels of contamination in multiple popular models, including Mistral, Llama, Yi, and the top-3 Open LLM Leaderboard models.
DOI: 10.48550/arxiv.2405.16281
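The summary outlines the core mechanism: compare a model's accuracy on the primary benchmark with the accuracy its reference-benchmark performance would predict, using a set of reference models to calibrate that prediction. The sketch below illustrates this general idea in Python; the per-sample correctness inputs, the linear calibration fit, and the bootstrap scheme are illustrative assumptions, not the exact estimator from the paper.

```python
# Minimal sketch of performance-based contamination detection, not ConStat's
# exact estimator: the linear fit and bootstrap below are illustrative choices.
import numpy as np

rng = np.random.default_rng(0)

def contamination_test(target_primary, target_reference,
                       ref_models_primary, ref_models_reference,
                       n_boot=5000):
    """Estimate how far the target model's primary-benchmark accuracy exceeds
    what its reference-benchmark accuracy predicts.

    target_primary / target_reference: per-sample 0/1 correctness arrays for the
        model under test on the primary and reference benchmark.
    ref_models_primary / ref_models_reference: one such array per (assumed
        uncontaminated) reference model, aligned to the same benchmark samples.
    Returns (estimated inflation, bootstrap p-value for "no inflation").
    """
    deltas = []
    for _ in range(n_boot):
        # Resample benchmark questions to capture finite-benchmark noise.
        idx_p = rng.integers(0, len(target_primary), len(target_primary))
        idx_r = rng.integers(0, len(target_reference), len(target_reference))
        ref_p = np.array([m[idx_p].mean() for m in ref_models_primary])
        ref_r = np.array([m[idx_r].mean() for m in ref_models_reference])
        # Fit a simple linear map from reference- to primary-benchmark accuracy.
        slope, intercept = np.polyfit(ref_r, ref_p, 1)
        predicted = slope * target_reference[idx_r].mean() + intercept
        observed = target_primary[idx_p].mean()
        deltas.append(observed - predicted)
    deltas = np.array(deltas)
    return deltas.mean(), float((deltas <= 0).mean())

# Toy usage with synthetic correctness data (purely illustrative).
n = 500
ref_models_reference = [rng.random(n) < p for p in (0.45, 0.55, 0.65, 0.75)]
ref_models_primary = [rng.random(n) < p for p in (0.47, 0.57, 0.67, 0.77)]
target_reference = rng.random(n) < 0.55   # generalizing skill around 55%
target_primary = rng.random(n) < 0.75     # inflated primary score around 75%
delta, p_value = contamination_test(target_primary, target_reference,
                                    ref_models_primary, ref_models_reference)
print(f"estimated inflation: {delta:.3f}, p-value: {p_value:.4f}")
```

In this toy setup the target's primary score is constructed to sit well above the trend of the reference models, so the estimated inflation should come out large and the bootstrap p-value small, flagging the performance as non-generalizing.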