Assessing Code Generation with Intermediate Languages
Saved in:

Main authors: , , , , , ,
Format: Article
Language: eng
Subjects:
Online access: Order full text
Abstract: Intermediate-step methodologies like chain of thought (CoT) have
demonstrated effectiveness in enhancing the performance of Large Language
Models (LLMs) on code generation. This study explores the utilization of
intermediate languages, including various programming languages, natural
language solutions, and pseudo-code, and systematically evaluates their impact
on the performance of LLMs in code generation tasks. Our experiments encompass
eleven models across the CodeLlama, GPT, and Mistral families, as well as newly
released smaller models. Our findings reveal that intermediate languages
generally exhibit greater efficacy in larger models that have not yet achieved
state-of-the-art performance. Natural language consistently emerges as the most
effective intermediate representation across all target languages. However, we
observe no universally effective intermediate formal language across different
models and target languages. Furthermore, we uncover a weak correlation between
the correctness of intermediate solutions and final generation, suggesting that
improvements may stem from the chain-of-thought effect rather than
language-specific transfer. Interestingly, we discover that for GPT family
models, prompting multiple times without explicit self-correction instructions
yields performance gains across the studied languages.
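The two ideas the abstract describes, generating an intermediate representation before the target code and re-prompting GPT-family models without self-correction instructions, can be illustrated with a minimal sketch. This is not the authors' released code: the model name, prompt wording, and the helper names `ask`, `solve_via_intermediate`, and `reprompt` are all hypothetical, and the OpenAI chat-completions client is just one possible backend.

```python
# Minimal sketch of intermediate-language prompting for code generation.
# Assumptions: an OpenAI API key is configured; the model choice and prompt
# wording are illustrative, not the paper's exact experimental setup.
from openai import OpenAI

client = OpenAI()


def ask(prompt: str) -> str:
    """Send a single-turn prompt and return the model's reply text."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical; the paper spans several model families
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


def solve_via_intermediate(problem: str, intermediate: str, target: str) -> str:
    """Two-stage generation: intermediate solution first, then target code."""
    # Stage 1: solve the task in the intermediate representation
    # (e.g. natural language, pseudo-code, or another programming language).
    plan = ask(
        f"Solve the following problem in {intermediate}. "
        f"Do not write {target} code yet.\n\n{problem}"
    )
    # Stage 2: translate the intermediate solution into the target language.
    return ask(
        f"Problem:\n{problem}\n\n{intermediate} solution:\n{plan}\n\n"
        f"Now implement the solution in {target}. Return only code."
    )


def reprompt(problem: str, target: str, rounds: int = 2) -> str:
    """Show the previous answer back and simply ask again, with no
    instruction to self-correct: one plausible reading of the abstract's
    GPT-family finding."""
    answer = ask(f"Implement the following in {target}:\n{problem}")
    for _ in range(rounds):
        answer = ask(
            f"Problem:\n{problem}\n\nPrevious answer:\n{answer}\n\n"
            f"Please provide your answer again in {target}."
        )
    return answer


if __name__ == "__main__":
    task = "Write a function that returns the n-th Fibonacci number."
    print(solve_via_intermediate(task, "natural language", "Python"))
```

Keeping stage 1 explicitly free of target-language code mirrors the intermediate-representation setup the study evaluates; whether the gain comes from the representation itself or from the CoT effect is exactly the question the weak intermediate-to-final correlation raises.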
DOI: 10.48550/arxiv.2407.05411