CONGRA: Benchmarking Automatic Conflict Resolution
Format: Article
Language: English
Online access: Order full text
Abstract: Resolving conflicts that arise when merging different software versions is a challenging task. To reduce the overhead of manual merging, researchers have developed various program-analysis-based tools, which solve only specific types of conflicts and have a limited scope of application. With the development of language models, researchers treat conflict code as text, which theoretically allows almost all types of conflicts to be addressed. However, the absence of effective conflict-difficulty grading methods hinders a comprehensive evaluation of large language models (LLMs), making it difficult to gain a deeper understanding of their limitations. Furthermore, there is a notable lack of large-scale open benchmarks for evaluating the performance of LLMs in automatic conflict resolution. To address these issues, we introduce ConGra, a CONflict-GRAded benchmarking scheme designed to evaluate the performance of software merging tools under conflict scenarios of varying complexity. We propose a novel approach to classifying conflicts based on code operations and use it to build a large-scale evaluation dataset of 44,948 conflicts from 34 real-world projects. Using this dataset, we assess the performance of multiple state-of-the-art LLMs and code LLMs on conflict resolution tasks, ultimately uncovering two counterintuitive yet insightful phenomena. ConGra will be released at https://github.com/HKU-System-Security-Lab/ConGra.
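The abstract treats conflict code as text that a model must resolve. As context, the textual conflicts in question are the standard Git conflict blocks delimited by `<<<<<<<`, `=======`, and `>>>>>>>` markers. Below is a minimal sketch of extracting the two conflicting sides from such a block; the function name and regular expression are illustrative assumptions, not taken from the paper or its tooling.

```python
import re

# Git wraps each conflict as:
#   <<<<<<< <ours-label>
#   ...our version...
#   =======
#   ...their version...
#   >>>>>>> <theirs-label>
# re.DOTALL lets the non-greedy bodies span multiple lines.
CONFLICT_RE = re.compile(
    r"<<<<<<< (?P<ours_label>.*?)\n"
    r"(?P<ours>.*?)"
    r"=======\n"
    r"(?P<theirs>.*?)"
    r">>>>>>> (?P<theirs_label>[^\n]*)",
    re.DOTALL,
)

def extract_conflicts(text):
    """Return (ours, theirs) code pairs for every conflict block in text."""
    return [(m.group("ours"), m.group("theirs"))
            for m in CONFLICT_RE.finditer(text)]

sample = (
    "def greet():\n"
    "<<<<<<< HEAD\n"
    "    print('hello')\n"
    "=======\n"
    "    print('hi')\n"
    ">>>>>>> feature\n"
)
pairs = extract_conflicts(sample)
print(pairs)  # one conflict; each side keeps its trailing newline
```

Such extracted pairs are the raw material any text-based resolver (LLM or otherwise) operates on; a grading scheme like ConGra's would additionally classify what code operations produced each pair.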
DOI: 10.48550/arxiv.2409.14121