MdEval: Massively Multilingual Code Debugging
Saved in:
Main Authors:
Format: Article
Language: eng
Keywords:
Online Access: Order full text
Summary: Code large language models (LLMs) have made significant progress in code
debugging by directly generating the correct code based on the buggy code
snippet. Programming benchmarks, typically consisting of buggy code snippets and
their associated test cases, are used to assess the debugging capabilities of
LLMs. However, many existing benchmarks primarily focus on Python and are often
limited in terms of language diversity (e.g., DebugBench and DebugEval). To
advance the field of multilingual debugging with LLMs, we propose the first
massively multilingual debugging benchmark, which includes 3.6K test samples
across 18 programming languages and covers the automated program repair (APR),
code review (CR), and bug identification (BI) tasks. Further, we introduce the
debugging instruction corpus MDEVAL-INSTRUCT by injecting bugs into correct
multilingual queries and solutions (xDebugGen). In addition, we train a
multilingual debugger, xDebugCoder, on MDEVAL-INSTRUCT as a strong baseline
designed specifically to handle bugs across a wide range of programming
languages (e.g., "Missing Mut" in Rust and "Misused Macro Definition" in C).
Our extensive experiments on MDEVAL reveal a notable performance gap between
open-source models and closed-source LLMs (e.g., the GPT and Claude series),
highlighting substantial room for improvement in multilingual code debugging
scenarios.
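
To give a concrete sense of the language-specific bug categories named above, the sketch below illustrates what a "Missing Mut" bug pair in Rust could look like: a buggy variant that fails to compile (shown as comments) and its repaired counterpart. This is a hypothetical illustration under assumed function names, not an actual MDEVAL sample.

```rust
// Hypothetical "Missing Mut" bug pair of the kind described for Rust in MDEVAL.
//
// Buggy variant: `total` is declared without `mut`, so the accumulation is
// rejected by the compiler.
//
// fn sum_positive(values: &[i32]) -> i32 {
//     let total = 0;
//     for &x in values {
//         if x > 0 {
//             total += x; // error[E0384]: cannot assign twice to immutable variable
//         }
//     }
//     total
// }

// Repaired variant: declaring the binding as `mut` makes the accumulation valid.
fn sum_positive(values: &[i32]) -> i32 {
    let mut total = 0;
    for &x in values {
        if x > 0 {
            total += x;
        }
    }
    total
}

fn main() {
    // Only the positive entries (3 and 4) are summed.
    assert_eq!(sum_positive(&[3, -1, 4]), 7);
    println!("sum_positive(&[3, -1, 4]) = {}", sum_positive(&[3, -1, 4]));
}
```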
DOI: 10.48550/arxiv.2411.02310