KoLA: Carefully Benchmarking World Knowledge of Large Language Models
Saved in:
Main authors:
Format: Article
Language: English
Subjects:
Online access: Order full text
Abstract: The unprecedented performance of large language models (LLMs) necessitates improvements in evaluations. Rather than merely exploring the breadth of LLM abilities, we believe meticulous and thoughtful designs are essential to thorough, unbiased, and applicable evaluations. Given the importance of world knowledge to LLMs, we construct a Knowledge-oriented LLM Assessment benchmark (KoLA), in which we carefully design three crucial factors: (1) For ability modeling, we mimic human cognition to form a four-level taxonomy of knowledge-related abilities, covering 19 tasks. (2) For data, to ensure fair comparisons, we use both Wikipedia, a corpus on which LLMs are prevalently pre-trained, and continuously collected emerging corpora, aiming to evaluate the capacity to handle unseen data and evolving knowledge. (3) For evaluation criteria, we adopt a contrastive system, including overall standard scores for better numerical comparability across tasks and models, and a unique self-contrast metric for automatically evaluating knowledge-creating ability. We evaluate 28 open-source and commercial LLMs and obtain some intriguing findings. The KoLA dataset and open-participation leaderboard are publicly released at https://kola.xlore.cn and will be continuously updated to provide references for developing LLMs and knowledge-related systems.
DOI: 10.48550/arxiv.2306.09296
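The "overall standard scores" mentioned in the abstract normalize each task's raw metric before aggregation, so that tasks measured on different scales become numerically comparable across models. The sketch below illustrates one common way to do this (per-task z-score rescaling); the task names, raw values, and the mean-50 / std-15 scaling are illustrative assumptions, not the paper's published formula.

```python
# Hypothetical sketch of per-task score standardization for cross-task comparability.
# Task names, raw scores, and the mean-50 / std-15 target scale are illustrative
# assumptions, not KoLA's exact procedure.
from statistics import mean, stdev

def standardize(raw_scores, target_mean=50.0, target_std=15.0):
    """Map one task's raw metric values (one per model) to standard scores."""
    mu, sigma = mean(raw_scores.values()), stdev(raw_scores.values())
    if sigma == 0:  # all models tied on this task
        return {m: target_mean for m in raw_scores}
    return {m: target_mean + target_std * (v - mu) / sigma
            for m, v in raw_scores.items()}

# Raw metrics live on different scales (e.g., F1 vs. ROUGE), so averaging them
# directly would let high-variance tasks dominate; standardizing first avoids that.
raw = {
    "entity_typing_f1":    {"model_a": 0.62, "model_b": 0.71, "model_c": 0.55},
    "summarization_rouge": {"model_a": 0.28, "model_b": 0.31, "model_c": 0.25},
}
per_task = {task: standardize(scores) for task, scores in raw.items()}
overall = {m: mean(per_task[t][m] for t in per_task) for m in per_task["entity_typing_f1"]}
print(overall)  # one comparable aggregate number per model
```

Averaging the standardized scores, rather than the raw metrics, is what allows a single leaderboard number to be compared across heterogeneous tasks.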