Demystifying Faulty Code with LLM: Step-by-Step Reasoning for Explainable Fault Localization
Main authors: , , , ,
Format: Article
Language: English
Subjects:
Online access: Order full text
Abstract:

Fault localization is a critical process that involves identifying the specific program elements responsible for program failures. Manually pinpointing these elements, such as classes, methods, or statements, associated with a fault is laborious and time-consuming. To overcome this challenge, various fault localization tools have been developed. These tools typically generate a ranked list of suspicious program elements; however, this information alone is insufficient. A prior study emphasized that automated fault localization should also offer a rationale.

In this study, we investigate step-by-step reasoning for explainable fault localization and explore the potential of Large Language Models (LLMs) in assisting developers with reasoning about code. We propose FuseFL, which fuses several sources of information to enhance the LLM's results: spectrum-based fault localization results, test case execution outcomes, and a code description (i.e., an explanation of what the given code is intended to do).
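The abstract names only the fused inputs; a minimal sketch of how such a fusion step might look follows (illustrative only, not the paper's actual implementation). It uses the standard Ochiai formula for the spectrum-based scores; the helper names `ochiai` and `build_prompt`, the coverage encoding, and the prompt wording are all assumptions.

```python
from math import sqrt

def ochiai(ef: int, ep: int, total_failed: int) -> float:
    """Ochiai suspiciousness for one line: ef / sqrt(F * (ef + ep)),
    where ef/ep count failing/passing tests covering the line and
    F is the total number of failing tests."""
    denom = sqrt(total_failed * (ef + ep))
    return ef / denom if denom else 0.0

def build_prompt(code: str, coverage: dict, total_failed: int,
                 failing_tests: list, description: str) -> str:
    """Fuse SBFL scores, failing-test outcomes, and the code description
    into one prompt that asks for step-by-step reasoning."""
    # Rank lines by suspiciousness, most suspicious first.
    scored = sorted(((n, ochiai(ef, ep, total_failed))
                     for n, (ef, ep) in coverage.items()),
                    key=lambda item: -item[1])
    sbfl_block = "\n".join(f"line {n}: suspiciousness {s:.3f}" for n, s in scored)
    test_block = "\n".join(
        f"input={t['input']!r} expected={t['expected']!r} actual={t['actual']!r}"
        for t in failing_tests)
    return (f"The code below is intended to: {description}\n\n"
            f"Code:\n{code}\n\n"
            f"Failing test cases:\n{test_block}\n\n"
            f"Spectrum-based suspiciousness per line:\n{sbfl_block}\n\n"
            "Think step by step, then list the most likely faulty lines, "
            "each with an explanation of why it is faulty.")

# Toy usage: a buggy median function (even-length lists are mishandled).
prompt = build_prompt(
    code="1: def median(xs):\n2:     xs.sort()\n3:     return xs[len(xs) // 2]",
    coverage={1: (1, 1), 2: (1, 1), 3: (1, 1)},
    total_failed=1,
    failing_tests=[{"input": [1, 2, 3, 4], "expected": 2.5, "actual": 3}],
    description="return the median of a list of numbers",
)
print(prompt)
```

Ranking lines by suspiciousness before prompting puts the strongest spectrum evidence first, so the model sees the most likely faulty lines alongside the failing tests that implicate them.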
We conducted our investigation using faulty code from the Refactory dataset. First, we evaluated the performance of the automated fault localization. Our results demonstrate an increase of more than 30% in the number of successfully localized faults at Top-1 compared to the baseline. To evaluate the explanations generated by FuseFL, we created a dataset of human explanations that provide step-by-step reasoning as to why specific lines of code are considered faulty. This dataset consists of 324 faulty code files, along with explanations for 600 faulty lines. Furthermore, we conducted human studies to evaluate the explanations and found that FuseFL generated correct explanations for 22 of the 30 randomly sampled cases.
DOI: 10.48550/arxiv.2403.10507