Mutual Reasoning Makes Smaller LLMs Stronger Problem-Solvers

This paper introduces rStar, a self-play mutual reasoning approach that significantly improves reasoning capabilities of small language models (SLMs) without fine-tuning or superior models. rStar decouples reasoning into a self-play mutual generation-discrimination process. First, a target SLM augments Monte Carlo Tree Search (MCTS) with a rich set of human-like reasoning actions to construct higher quality reasoning trajectories. Next, another SLM, with capabilities similar to the target SLM, acts as a discriminator to verify each trajectory generated by the target SLM. The mutually agreed reasoning trajectories are considered mutually consistent and are thus more likely to be correct. Extensive experiments across five SLMs demonstrate that rStar can effectively solve diverse reasoning problems, including GSM8K, GSM-Hard, MATH, SVAMP, and StrategyQA. Remarkably, rStar boosts GSM8K accuracy from 12.51% to 63.91% for LLaMA2-7B, from 36.46% to 81.88% for Mistral-7B, and from 74.53% to 91.13% for LLaMA3-8B-Instruct. Code will be available at https://github.com/zhentingqi/rStar.
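The generation-discrimination loop sketched in the abstract can be illustrated with a minimal toy in Python. This is not the authors' implementation: `generate_trajectories` and `complete_from_prefix` are hypothetical stand-ins for calls to the two SLMs, and the mutual-consistency check is reduced to comparing final answers on a masked trajectory.

```python
import random

def generate_trajectories(question, n=3):
    # Stand-in for the target SLM exploring reasoning paths (via MCTS in rStar).
    # Each trajectory is (list_of_steps, final_answer); here the answer is faked.
    return [(["step A", "step B"], random.choice(["42", "41"])) for _ in range(n)]

def complete_from_prefix(question, prefix_steps):
    # Stand-in for the discriminator SLM: given a partial trajectory,
    # it completes the reasoning independently and returns its own answer.
    return "42"

def rstar_answer(question):
    """Return the answer from the first mutually consistent trajectory, else None."""
    for steps, answer in generate_trajectories(question):
        # Mask the tail of the trajectory and let the discriminator finish it.
        prefix = steps[: len(steps) // 2]
        if complete_from_prefix(question, prefix) == answer:
            return answer  # both SLMs agree, so the trajectory is kept
    return None  # no mutually consistent trajectory found
```

The key design point the sketch preserves is that verification needs no stronger model: the discriminator is a peer SLM, and agreement between two similarly capable models is used as the correctness signal.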

Detailed Description

Saved in:
Bibliographic Details
Main Authors: Qi, Zhenting; Ma, Mingyuan; Xu, Jiahang; Zhang, Li Lyna; Yang, Fan; Yang, Mao
Format: Article
Language: English
Subjects:
Online Access: Order full text
DOI: 10.48550/arxiv.2408.06195
Source: arXiv.org
Subjects: Computer Science - Computation and Language
Title: Mutual Reasoning Makes Smaller LLMs Stronger Problem-Solvers