MixEval: Deriving Wisdom of the Crowd from LLM Benchmark Mixtures
Saved in:
Main authors:
Format: Article
Language: English
Subjects:
Online access: Order full text
Abstract: Evaluating large language models (LLMs) is challenging. Traditional ground-truth-based benchmarks fail to capture the comprehensiveness and nuance of real-world queries, while LLM-as-judge benchmarks suffer from grading biases and limited query quantity. Both of them may also become contaminated over time. User-facing evaluation, such as Chatbot Arena, provides reliable signals but is costly and slow. In this work, we propose MixEval, a new paradigm for establishing efficient, gold-standard LLM evaluation by strategically mixing off-the-shelf benchmarks. It bridges (1) comprehensive and well-distributed real-world user queries and (2) efficient and fairly-graded ground-truth-based benchmarks, by matching queries mined from the web with similar queries from existing benchmarks. Based on MixEval, we further build MixEval-Hard, which offers more room for model improvement. Our benchmarks' advantages lie in (1) a 0.96 model ranking correlation with Chatbot Arena arising from the highly impartial query distribution and grading mechanism, (2) fast, cheap, and reproducible execution (6% of the time and cost of MMLU), and (3) dynamic evaluation enabled by the rapid and stable data update pipeline. We provide extensive meta-evaluation and analysis for our and existing LLM benchmarks to deepen the community's understanding of LLM evaluation and guide future research directions.
DOI: 10.48550/arxiv.2406.06565
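The abstract describes MixEval's core step as matching web-mined user queries to similar queries in existing ground-truth benchmarks, so that the resulting benchmark mixture follows a real-world query distribution while retaining ground-truth grading. Below is a minimal sketch of that matching step, assuming an embedding-based nearest-neighbor retriever; the embedding model, similarity threshold, and toy data are illustrative assumptions, not details taken from the paper.

```python
# Sketch: match web-mined user queries to the most similar queries in
# existing ground-truth benchmarks, as described in the MixEval abstract.
# Assumptions (not from the paper): the embedding model, the similarity
# threshold, and the toy data below are illustrative only.
import numpy as np
from sentence_transformers import SentenceTransformer

# Toy stand-ins for (1) queries mined from the web and (2) the pooled
# query sets of off-the-shelf ground-truth benchmarks.
web_queries = [
    "What causes inflation to rise?",
    "How do vaccines train the immune system?",
]
benchmark_pool = [
    {"benchmark": "MMLU", "query": "Which factor most directly increases inflation?"},
    {"benchmark": "TriviaQA", "query": "What year did World War II end?"},
    {"benchmark": "BoolQ", "query": "Do vaccines expose the immune system to antigens?"},
]

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model
web_emb = model.encode(web_queries, normalize_embeddings=True)
pool_emb = model.encode([item["query"] for item in benchmark_pool],
                        normalize_embeddings=True)

# Cosine similarity: embeddings are L2-normalized, so a dot product suffices.
sims = web_emb @ pool_emb.T

SIM_THRESHOLD = 0.5  # illustrative cutoff, not a value from the paper
mixed_benchmark = []
for i, query in enumerate(web_queries):
    j = int(np.argmax(sims[i]))
    if sims[i, j] >= SIM_THRESHOLD:
        match = benchmark_pool[j]
        mixed_benchmark.append({
            "web_query": query,
            "matched_from": match["benchmark"],
            "benchmark_query": match["query"],
            "similarity": float(sims[i, j]),
        })

# The matched ground-truth items form the mixture: their distribution mirrors
# the web queries, while grading remains ground-truth-based.
for row in mixed_benchmark:
    print(row)
```

Because the embeddings are normalized, the matching step reduces to a dot product and can be rerun cheaply whenever the web-query pool is refreshed, which is consistent with the dynamic-evaluation property the abstract highlights.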