ConAIR: Consistency-Augmented Iterative Interaction Framework to Enhance the Reliability of Code Generation
Saved in:
Main Authors: | , , , , |
Format: | Article |
Language: | eng |
Subjects: | |
Online Access: | Order full text |
Abstract: | Code generation techniques automatically generate code snippets from
problem requirements stated in natural language. Recently, large language
models (LLMs) have achieved state-of-the-art performance on code generation.
However, LLMs still struggle at times to generate accurate code, which
diminishes their promised efficiency, as developers must spend significant
effort evaluating and debugging the generated code. To improve the reliability
and quality of the generated code, researchers have proposed leveraging
consistency to obtain a better solution by generating and ranking multiple
candidates. The existing approach is problematic because consistency deems a
code candidate better when (1) it passes more tests (inter-consistency) and
(2) more candidates share the same behavior (intra-consistency). However,
because the tests are also generated by LLMs, they can be wrong as well. As a
result, majority voting based on test results is unreliable. Relying solely on
consistency is insufficient to address this issue; integrating user feedback
is essential for effectively guiding consistency. We show that with minimal
human effort, performance can be significantly enhanced. We propose ConAIR, a
Consistency-Augmented Iterative Interaction Framework to Enhance the
Reliability of Code Generation, an approach that aims to improve the
performance of a code generator through two distinctive ingredients: (1)
lightweight user effort for validating the correctness of selected tests; and
(2) a dynamic strategy for ranking, localizing, and correcting multiple tests
and code candidates. Overall, we propose a lightweight interaction framework
that incorporates user feedback to correct identified tests and guide the
iterative process. With the help of consistency, only 4 iteration rounds are
needed on average. With only lightweight human effort, we achieve an
improvement of 33% over the base model. |
DOI: | 10.48550/arxiv.2411.15587 |
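The inter- and intra-consistency signals described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation; the candidates are plain Python functions and the `consistency_rank` helper, its scoring formula, and the `(input, expected)` test format are all assumptions made for illustration. The idea: score each candidate by how many LLM-generated tests it passes (inter-consistency) plus how many other candidates produce identical outputs (intra-consistency), then rank best-first.

```python
from collections import Counter


def run(candidate, test_input):
    """Hypothetical stand-in for executing a candidate program on one input.
    Candidates are plain Python callables here for illustration."""
    try:
        return candidate(test_input)
    except Exception:
        return None  # crashes count as a distinct (failing) behavior


def consistency_rank(candidates, tests):
    """Rank candidates by inter-consistency (tests passed) plus
    intra-consistency (number of candidates sharing identical behavior).
    `tests` is a list of (input, expected) pairs generated by an LLM,
    so the expected values themselves may be wrong."""
    # Behavior signature: tuple of outputs over all test inputs.
    signatures = [tuple(run(c, i) for i, _ in tests) for c in candidates]
    sig_counts = Counter(signatures)

    scores = []
    for sig in signatures:
        inter = sum(out == expected
                    for out, (_, expected) in zip(sig, tests))
        intra = sig_counts[sig]  # candidates with identical behavior
        scores.append(inter + intra)

    order = sorted(range(len(candidates)), key=lambda k: -scores[k])
    return [candidates[k] for k in order], scores
```

For example, with candidates `[lambda x: x * 2, lambda x: x + x, lambda x: x ** 2]` and tests `[(2, 4), (3, 6)]`, the first two candidates agree with each other and pass both tests, so they outrank the squaring candidate. The abstract's point is that when the `expected` values are themselves LLM-generated and wrong, this voting scheme misranks candidates, which is why ConAIR adds lightweight user validation of selected tests.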