KGQuiz: Evaluating the Generalization of Encoded Knowledge in Large Language Models
Format: Article
Language: English
Online access: Order full text
Abstract: Large language models (LLMs) demonstrate remarkable performance on
knowledge-intensive tasks, suggesting that real-world knowledge is encoded in
their model parameters. However, beyond explorations of a few probing tasks in
limited knowledge domains, it is not well understood how to systematically
evaluate LLMs' knowledge or how well their knowledge abilities generalize
across a spectrum of knowledge domains and progressively complex task formats.
To this end, we propose KGQuiz, a knowledge-intensive benchmark to
comprehensively investigate the knowledge generalization abilities of LLMs.
KGQuiz is a scalable framework constructed from triplet-based knowledge, which
covers three knowledge domains and consists of five tasks with increasing
complexity: true-or-false, multiple-choice QA, blank filling, factual editing,
and open-ended knowledge generation. To gain a better understanding of LLMs'
knowledge abilities and their generalization, we evaluate 10 open-source and
black-box LLMs on the KGQuiz benchmark across the five knowledge-intensive
tasks and three knowledge domains. Extensive experiments demonstrate that LLMs
achieve impressive performance on straightforward knowledge QA tasks, while
settings that require more complex reasoning or involve domain-specific facts
still present significant challenges. We envision KGQuiz
as a testbed to analyze such nuanced variations in performance across domains
and task formats, and ultimately to understand, evaluate, and improve LLMs'
knowledge abilities across a wide spectrum of knowledge domains and tasks.
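
The abstract describes building every benchmark instance from triplet-based knowledge. The sketch below illustrates that idea only; the templates, the distractor list, and the example triplet are assumptions made for this illustration, not the paper's actual construction code. A (subject, relation, object) triplet could be expanded into the five task formats roughly as follows:

```python
import random

# Illustrative triplet and distractors; these are assumptions for this sketch,
# not items from the actual KGQuiz benchmark.
triplet = ("Marie Curie", "was awarded", "the Nobel Prize in Physics")
distractors = ["the Fields Medal", "the Turing Award", "the Booker Prize"]

def true_or_false(s, r, o):
    """Task 1 (true-or-false): verify a statement whose object may be corrupted."""
    obj = o if random.random() < 0.5 else random.choice(distractors)
    return f"True or false: {s} {r} {obj}."

def multiple_choice(s, r, o):
    """Task 2 (multiple-choice QA): pick the correct object among distractors."""
    options = distractors + [o]
    random.shuffle(options)
    question = [f"{s} {r} which of the following?"]
    question += [f"{letter}. {opt}" for letter, opt in zip("ABCD", options)]
    return "\n".join(question)

def blank_filling(s, r, o):
    """Task 3 (blank filling): cloze completion of the masked object."""
    return f"Fill in the blank: {s} {r} ____."

def factual_editing(s, r, o):
    """Task 4 (factual editing): repair a deliberately corrupted statement."""
    wrong = random.choice(distractors)
    return f"Edit this statement so it is factually correct: {s} {r} {wrong}."

def open_ended(s, r, o):
    """Task 5 (open-ended generation): free-form writing grounded in the subject."""
    return f"Write a short, factually accurate paragraph about {s}."

for make_task in (true_or_false, multiple_choice, blank_filling,
                  factual_editing, open_ended):
    print(make_task(*triplet), end="\n\n")
```

As the abstract emphasizes, the appeal of this triplet-based design is scalability: any knowledge graph in any domain supplies fresh (subject, relation, object) triplets, and the same templates instantiate all five task formats over them.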
DOI: 10.48550/arxiv.2310.09725