CodeComplex: Dataset for Worst-Case Time Complexity Prediction
Format: Article
Language: English
Abstract: The reasoning ability of Large Language Models (LLMs) is crucial, especially in complex decision-making tasks. One significant task for demonstrating LLMs' reasoning capability is code time complexity prediction, which involves intricate factors such as the input range of variables and conditional loops. Current benchmarks fall short of providing a rigorous assessment due to limited data, language constraints, and insufficient labeling. They do not consider time complexity based on input representation and merely evaluate whether predictions fall into the same class, lacking a measure of how close incorrect predictions are to the correct ones. To address these shortcomings, we introduce CodeComplex, the first robust and extensive dataset designed to evaluate LLMs' reasoning abilities in predicting code time complexity. CodeComplex comprises 4,900 Java programs and an equivalent number of Python programs, overcoming language and labeling constraints, each carefully annotated with a complexity label based on input characteristics by a panel of algorithmic experts. Additionally, we propose specialized evaluation metrics for the complexity prediction task, offering a more precise and reliable assessment of LLMs' reasoning capabilities. We release our dataset (https://github.com/sybaik1/CodeComplex-Data) and baseline models (https://github.com/sybaik1/CodeComplex-Models) publicly to encourage the relevant (NLP, SE, and PL) communities to utilize and participate in this research.
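The closeness-aware evaluation the abstract calls for suggests scoring a prediction by its distance within an ordered hierarchy of complexity classes, rather than only by exact match. The following is a minimal illustrative sketch of such an ordinal score, assuming a linear ordering of classes and a linear decay formula; the class set and scoring function here are assumptions for illustration, not the metric definitions from the paper.

# Hypothetical ordinal scoring sketch for complexity prediction.
# The class hierarchy and formula below are illustrative assumptions,
# not the actual evaluation metrics defined in CodeComplex.

# An assumed ordering of worst-case complexity classes.
CLASSES = ["O(1)", "O(log n)", "O(n)", "O(n log n)", "O(n^2)", "O(n^3)", "O(2^n)"]
RANK = {c: i for i, c in enumerate(CLASSES)}

def ordinal_score(predicted: str, actual: str) -> float:
    """Score in [0, 1]: 1.0 for an exact match, decaying linearly
    with the distance between the two classes in the hierarchy."""
    distance = abs(RANK[predicted] - RANK[actual])
    return 1.0 - distance / (len(CLASSES) - 1)

# Predicting O(n log n) for a true O(n^2) program scores higher than
# predicting O(1), reflecting how close the incorrect guess was.
print(ordinal_score("O(n log n)", "O(n^2)"))  # 0.833...
print(ordinal_score("O(1)", "O(n^2)"))        # 0.333...

Under such a scheme, a near-miss prediction is rewarded more than a wildly wrong one, which is the kind of graded assessment the abstract argues exact-match evaluation lacks.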
DOI: 10.48550/arxiv.2401.08719