Q*: Improving Multi-step Reasoning for LLMs with Deliberative Planning

Large Language Models (LLMs) have demonstrated impressive capability in many natural language tasks. However, the auto-regressive generation process makes LLMs prone to produce errors, hallucinations and inconsistent statements when performing multi-step reasoning. In this paper, by casting multi-step reasoning of LLMs as a heuristic search problem, we aim to alleviate the pathology by introducing Q*, a general, versatile and agile framework for guiding LLMs decoding process with deliberative planning. By learning a plug-and-play Q-value model as heuristic function for estimating expected future rewards, our Q* can effectively guide LLMs to select the most promising next reasoning step without fine-tuning LLMs for the current task, which avoids the significant computational overhead and potential risk of performance degeneration on other tasks. Extensive experiments on GSM8K, MATH and MBPP demonstrate the superiority of our method, contributing to improving the reasoning performance of existing open-source LLMs.
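
The abstract casts step-by-step reasoning as heuristic search: a frozen LLM proposes candidate next reasoning steps, and a separately trained Q-value model scores each partial trace by its estimated expected future reward, so the search always expands the most promising trace first. The sketch below illustrates that control loop only; it is not the authors' implementation, and propose_steps, q_value and is_terminal are hypothetical stand-ins for the LLM step proposer, the learned Q-value model and a completion check.

    import heapq

    def q_star_search(question, propose_steps, q_value, is_terminal,
                      max_expansions=100, top_k=4):
        # Best-first search over partial reasoning traces. Frontier entries
        # are (negated score, tie-breaker, trace); because scores are
        # negated, heapq always pops the highest-scoring trace first.
        counter = 0
        frontier = [(0.0, counter, [])]
        while frontier and max_expansions > 0:
            _, _, trace = heapq.heappop(frontier)
            if is_terminal(question, trace):
                return trace  # most promising complete reasoning path
            max_expansions -= 1
            # The frozen LLM proposes candidate next steps; no fine-tuning.
            for step in propose_steps(question, trace)[:top_k]:
                counter += 1
                new_trace = trace + [step]
                # The plug-and-play Q-value model estimates the expected
                # future reward of the extended trace and serves as the
                # heuristic that ranks the frontier.
                heapq.heappush(frontier,
                               (-q_value(question, new_trace),
                                counter, new_trace))
        return None  # search budget exhausted

This simplification ranks a state by its Q-value estimate alone; the paper's full formulation also aggregates the utility of the path taken so far, A*-style.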

Bibliographic Details
Main Authors: Wang, Chaojie; Deng, Yanchen; Lyu, Zhiyi; Zeng, Liang; He, Jujie; Yan, Shuicheng; An, Bo
Format: Article
Language: English
Subjects: Computer Science - Artificial Intelligence
Online Access: https://arxiv.org/abs/2406.14283
DOI: 10.48550/arxiv.2406.14283
Source: arXiv.org