Developing a Pragmatic Benchmark for Assessing Korean Legal Language Understanding in Large Language Models

Large language models (LLMs) have demonstrated remarkable performance in the legal domain, with GPT-4 even passing the Uniform Bar Exam in the U.S. However, their efficacy remains limited for non-standardized tasks and for tasks in languages other than English. This underscores the need for careful evaluation of LLMs within each legal system before application. Here, we introduce KBL, a benchmark for assessing the Korean legal language understanding of LLMs, consisting of (1) 7 legal knowledge tasks (510 examples), (2) 4 legal reasoning tasks (288 examples), and (3) the Korean bar exam (4 domains, 53 tasks, 2,510 examples). The first two datasets were developed in close collaboration with lawyers to evaluate LLMs in practical scenarios in a certified manner. Furthermore, considering legal practitioners' frequent use of extensive legal documents for research, we assess LLMs in both a closed-book setting, where they rely solely on internal knowledge, and a retrieval-augmented generation (RAG) setting, using a corpus of Korean statutes and precedents. The results indicate substantial room and opportunities for improvement.
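To make the two evaluation settings concrete, here is a minimal sketch of how a multiple-choice benchmark like KBL could be scored closed-book versus with retrieval augmentation. This is not the authors' code: the `Example` record layout, the `retrieve` helper, and the `ask_llm` interface are all hypothetical placeholders, and the toy word-overlap retriever stands in for whatever retrieval system the paper actually uses.

```python
# Hypothetical sketch of closed-book vs. RAG scoring; not the KBL harness.
from dataclasses import dataclass
from typing import Callable, List, Optional


@dataclass
class Example:
    question: str        # e.g., a bar-exam question in Korean
    choices: List[str]   # multiple-choice options, e.g., ["A. ...", "B. ..."]
    answer: str          # gold label, e.g., "A"


def retrieve(query: str, corpus: List[str], k: int = 3) -> List[str]:
    """Toy retriever: rank passages by word overlap with the query.
    A real setup would use BM25 or dense embeddings over the corpus
    of statutes and precedents."""
    words = set(query.split())
    ranked = sorted(corpus, key=lambda p: len(words & set(p.split())),
                    reverse=True)
    return ranked[:k]


def accuracy(examples: List[Example],
             ask_llm: Callable[[str], str],
             corpus: Optional[List[str]] = None) -> float:
    """Score one setting: closed book if corpus is None, RAG otherwise."""
    correct = 0
    for ex in examples:
        prompt = ex.question + "\n" + "\n".join(ex.choices) + "\nAnswer:"
        if corpus is not None:  # RAG: prepend retrieved reference material
            context = "\n".join(retrieve(ex.question, corpus))
            prompt = "Reference material:\n" + context + "\n\n" + prompt
        if ask_llm(prompt).strip().upper().startswith(ex.answer):
            correct += 1
    return correct / len(examples)
```

Comparing `accuracy(examples, ask_llm)` against `accuracy(examples, ask_llm, corpus=statutes)` then isolates how much the retrieved statutes and precedents help, mirroring the closed-book/RAG comparison described in the abstract.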

Bibliographic details
Main authors: Kim, Yeeun; Choi, Young Rok; Choi, Eunkyung; Choi, Jinhwan; Park, Hai Jin; Hwang, Wonseok
Format: Article
Language: English
Subjects: Computer Science - Artificial Intelligence; Computer Science - Computation and Language
DOI: 10.48550/arxiv.2410.08731
Source: arXiv.org
Published: 2024-10-11
Online access: https://arxiv.org/abs/2410.08731