HumanEval-XL: A Multilingual Code Generation Benchmark for Cross-lingual Natural Language Generalization
Large language models (LLMs) have made significant progress in generating code from textual prompts. However, existing benchmarks have mainly concentrated on translating English prompts into multilingual code, or have been constrained to a very limited set of natural languages (NLs). These benchmarks overlook the vast landscape of massively multilingual NL to multilingual code, leaving a critical gap in the evaluation of multilingual LLMs. In response, we introduce HumanEval-XL, a massively multilingual code generation benchmark specifically crafted to address this deficiency. HumanEval-XL establishes connections between 23 NLs and 12 programming languages (PLs), and comprises 22,080 prompts with an average of 8.33 test cases each. By ensuring parallel data across multiple NLs and PLs, HumanEval-XL offers a comprehensive evaluation platform for multilingual LLMs, allowing assessment of how well they understand different NLs. Our work is a pioneering step towards filling the void in evaluating NL generalization in multilingual code generation. We make our evaluation code and data publicly available at https://github.com/FloatAI/humaneval-xl.
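The reported scale is internally consistent: with parallel coverage of 23 NLs and 12 PLs, the 22,080 prompts work out to 80 prompts per NL-PL pair (22,080 / (23 × 12) = 80). A minimal sketch of that arithmetic follows, together with a hypothetical record layout; the actual schema is defined in the authors' repository, not here.

```python
# Sanity-check of HumanEval-XL's reported scale (figures from the abstract).
NUM_NLS = 23            # natural languages covered
NUM_PLS = 12            # programming languages covered
TOTAL_PROMPTS = 22_080  # total prompts in the benchmark

prompts_per_pair = TOTAL_PROMPTS // (NUM_NLS * NUM_PLS)
assert prompts_per_pair * NUM_NLS * NUM_PLS == TOTAL_PROMPTS
print(f"{prompts_per_pair} prompts per NL-PL pair")  # -> 80

# Hypothetical shape of one parallel record; the real schema lives in
# https://github.com/FloatAI/humaneval-xl and may differ.
example_record = {
    "task_id": "python/0",             # illustrative identifier
    "natural_language": "de",          # language of the prompt text
    "programming_language": "python",  # target implementation language
    "prompt": "...",                   # NL docstring + function signature
    "test_cases": [],                  # ~8.33 test cases per prompt on average
}
```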
Saved in:
Published in: | arXiv.org, 2024-03 |
---|---|
Main authors: | Peng, Qiwei; Chai, Yekun; Li, Xuhong |
Format: | Article |
Language: | English |
Subjects: | Benchmarks; Large language models; Multilingualism; Programming languages; Translating |
Online access: | Full text |
Identifier: | EISSN: 2331-8422 |
Source: | Freely Accessible Journals |