Q*: Improving Multi-step Reasoning for LLMs with Deliberative Planning
Format: | Article |
Language: | English |
Subjects: | |
Online access: | Order full text |
Abstract: | Large Language Models (LLMs) have demonstrated impressive capabilities in many natural language tasks. However, the auto-regressive generation process makes LLMs prone to producing errors, hallucinations and inconsistent statements when performing multi-step reasoning. In this paper, by casting multi-step reasoning of LLMs as a heuristic search problem, we aim to alleviate this pathology by introducing Q*, a general, versatile and agile framework for guiding the LLM decoding process with deliberative planning. By learning a plug-and-play Q-value model as a heuristic function for estimating expected future rewards, Q* can effectively guide LLMs to select the most promising next reasoning step without fine-tuning LLMs for the current task, which avoids significant computational overhead and the potential risk of performance degradation on other tasks. Extensive experiments on GSM8K, MATH and MBPP demonstrate the superiority of our method, contributing to improving the reasoning performance of existing open-source LLMs. |
DOI: | 10.48550/arxiv.2406.14283 |
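
The abstract describes guiding a frozen LLM's decoding with a learned Q-value heuristic over reasoning steps. Below is a minimal sketch of what such deliberative planning could look like as best-first search; it is an illustration, not the paper's implementation. The function name `q_guided_search` and the callables `sample_next_steps` (candidate steps proposed by the LLM), `q_value` (learned heuristic estimating expected future reward), and `is_terminal` (detects a finished answer) are all hypothetical placeholders.

```python
import heapq
from typing import Callable, List, Tuple


def q_guided_search(
    question: str,
    sample_next_steps: Callable[[str, List[str]], List[str]],  # hypothetical: frozen LLM proposes candidate steps
    q_value: Callable[[str, List[str]], float],                # hypothetical: learned Q-value heuristic
    is_terminal: Callable[[List[str]], bool],                  # hypothetical: detects a completed answer
    max_expansions: int = 100,
    beam: int = 4,
) -> List[str]:
    """Best-first search over partial reasoning traces, expanding the highest-Q trace first."""
    trace: List[str] = []
    # heapq pops the smallest element, so scores are negated to expand the highest-Q trace first.
    frontier: List[Tuple[float, List[str]]] = [(0.0, trace)]
    for _ in range(max_expansions):
        if not frontier:
            break
        _, trace = heapq.heappop(frontier)
        if is_terminal(trace):
            return trace  # best completed trace found under the heuristic
        # Propose a few candidate next steps and score each extended trace with the heuristic.
        for step in sample_next_steps(question, trace)[:beam]:
            new_trace = trace + [step]
            heapq.heappush(frontier, (-q_value(question, new_trace), new_trace))
    return trace  # budget exhausted: fall back to the last (best-scoring) partial trace popped
```

In this sketch only the heuristic scores candidate continuations while the LLM itself stays frozen, mirroring the abstract's claim that the Q-value model is plug-and-play and requires no task-specific fine-tuning of the LLM.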