AC-EVAL: Evaluating Ancient Chinese Language Understanding in Large Language Models
Saved in:
Main authors: | , , , , , , , |
---|---|
Format: | Article |
Language: | English |
Subjects: | |
Online access: | Order full text |
Abstract: | Given the importance of ancient Chinese in capturing the essence of rich
historical and cultural heritage, the rapid advancements in Large Language
Models (LLMs) necessitate benchmarks that can effectively evaluate their
understanding of ancient contexts. To meet this need, we present AC-EVAL, an
innovative benchmark designed to assess the advanced knowledge and reasoning
capabilities of LLMs within the context of ancient Chinese. AC-EVAL is
structured across three levels of difficulty reflecting different facets of
language comprehension: general historical knowledge, short text understanding,
and long text comprehension. The benchmark comprises 13 tasks, spanning
historical facts, geography, social customs, art, philosophy, classical poetry
and prose, providing a comprehensive assessment framework. Our extensive
evaluation of top-performing LLMs, tailored for both English and Chinese,
reveals a substantial potential for enhancing ancient text comprehension. By
highlighting the strengths and weaknesses of LLMs, AC-EVAL aims to advance
their development and application in the realms of ancient Chinese
language education and scholarly research. The AC-EVAL data and evaluation code
are available at https://github.com/yuting-wei/AC-EVAL. |
---|---|
DOI: | 10.48550/arxiv.2403.06574 |