mHumanEval -- A Multilingual Benchmark to Evaluate Large Language Models for Code Generation
Main Authors:
Format: Article
Language: English
Subjects:
Online Access: Order full text
Abstract: Recent advancements in large language models (LLMs) have significantly enhanced code generation from natural language prompts. The HumanEval benchmark, developed by OpenAI, remains the most widely used code generation benchmark. However, this and other Code LLM benchmarks face critical limitations, particularly in task diversity, test coverage, and linguistic scope. Current evaluations primarily focus on English-to-Python conversion tasks with limited test cases, potentially overestimating model performance. While recent work has addressed test coverage and programming language (PL) diversity, code generation from low-resource language prompts remains largely unexplored. To address this gap, we introduce mHumanEval, an extended benchmark supporting prompts in over 200 natural languages. We employ established machine translation methods to compile the benchmark, coupled with a quality assurance process. Furthermore, we provide expert human translations for 15 diverse natural languages (NLs). We conclude by analyzing the multilingual code generation capabilities of state-of-the-art (SOTA) Code LLMs, offering insights into the current landscape of cross-lingual code generation.
DOI: 10.48550/arxiv.2410.15037
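
The abstract does not spell out how correctness is scored. HumanEval-style benchmarks conventionally report the unbiased pass@k estimator introduced with the original HumanEval paper, and a benchmark that extends HumanEval would typically inherit it. Below is a minimal sketch of that estimator, assuming mHumanEval follows the same protocol; the function name and the sample numbers in the usage line are illustrative, not taken from the record.

```python
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator (Chen et al., 2021, HumanEval).

    n: total completions sampled for a problem
    c: completions that pass all unit tests
    k: evaluation budget (number of attempts allowed)
    """
    # If every subset of size k must contain a correct sample, pass@k is 1.
    if n - c < k:
        return 1.0
    # 1 - C(n - c, k) / C(n, k), computed as a stable running product.
    return float(1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1)))

# Illustrative usage: 200 samples per prompt, 37 pass the tests, report pass@10.
print(round(pass_at_k(200, 37, 10), 4))
```

Per-problem scores are then averaged over all prompts of a given natural language, which is how a cross-lingual comparison like the one described in the abstract would typically be tabulated.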