Developing a Pragmatic Benchmark for Assessing Korean Legal Language Understanding in Large Language Models
Format: Article
Language: English
Abstract: Large language models (LLMs) have demonstrated remarkable performance in the legal domain, with GPT-4 even passing the Uniform Bar Exam in the U.S. However, their efficacy remains limited for non-standardized tasks and for tasks in languages other than English. This underscores the need for careful evaluation of LLMs within each legal system before application. Here, we introduce KBL, a benchmark for assessing the Korean legal language understanding of LLMs, consisting of (1) 7 legal knowledge tasks (510 examples), (2) 4 legal reasoning tasks (288 examples), and (3) the Korean bar exam (4 domains, 53 tasks, 2,510 examples). The first two datasets were developed in close collaboration with lawyers to evaluate LLMs in practical scenarios in a certified manner. Furthermore, considering legal practitioners' frequent use of extensive legal documents for research, we assess LLMs both in a closed-book setting, where they rely solely on internal knowledge, and in a retrieval-augmented generation (RAG) setting, where they can draw on a corpus of Korean statutes and precedents. The results indicate substantial room for improvement.
DOI: 10.48550/arxiv.2410.08731