Flow of Reasoning: Training LLMs for Divergent Problem Solving with Minimal Examples

The ability to generate diverse solutions to a given problem is a hallmark of human creativity. This divergent reasoning is also crucial for machines, enhancing their robustness and enabling them to assist humans in many applications such as scientific discovery. However, existing approaches to multi-step reasoning with large language models (LLMs) have mostly focused only on reasoning accuracy, without further discovering more diverse valid solutions. For example, supervised fine-tuning can improve LLM reasoning quality, but requires extensive supervised data to capture the full range of possible solutions. Reinforcement learning aims to find a limited set of highest-reward solutions while neglecting solution diversity. To fill this gap, we propose Flow of Reasoning (FoR), an efficient diversity-seeking LLM finetuning method aimed at improving reasoning quality and diversity with minimal data. FoR formulates multi-step LLM reasoning as a Markovian flow on a DAG-structured reasoning graph. This formulation allows us to incorporate and adapt principled GFlowNet approaches for finetuning LLMs to sample diverse reasoning paths with probabilities proportional to the (unnormalized) reward of target problems. Extensive experiments show that, with limited training examples (e.g., 15 examples), FoR enables the discovery of diverse, creative, high-quality solutions, greatly outperforming a wide range of existing inference and training methods across five challenging puzzle-solving tasks, including BlocksWorld (embodied reasoning), Game24 (math puzzle solving), Rubik's Cube (spatial reasoning), 1D-ARC (abstraction reasoning), and PrOntoQA (logical reasoning). Code is available at https://github.com/Yu-Fangxu/FoR.

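For context on the method the abstract outlines: GFlowNet finetuning typically minimizes a trajectory-balance objective so that complete reasoning paths are sampled with probability proportional to their (unnormalized) reward. The following is a minimal sketch under that assumption; the function name, tensor shapes, and stand-in values are illustrative and are not taken from the FoR codebase (linked in the abstract).

```python
# Minimal, illustrative sketch of a GFlowNet-style trajectory-balance loss
# for multi-step LLM reasoning. This is NOT the authors' implementation;
# see https://github.com/Yu-Fangxu/FoR for the actual code.
import torch

def trajectory_balance_loss(step_log_probs: torch.Tensor,
                            log_reward: torch.Tensor,
                            log_z: torch.Tensor) -> torch.Tensor:
    """Trajectory balance: L(tau) = (log Z + sum_t log P_F(s_{t+1}|s_t) - log R(x))^2.

    Minimizing this over sampled trajectories drives the policy toward
    sampling terminal states with probability proportional to R(x).

    step_log_probs: (T,) log-probabilities the LLM assigns to each reasoning step
    log_reward:     scalar log of the unnormalized terminal reward R(x)
    log_z:          learnable scalar estimating the log partition function
    """
    return (log_z + step_log_probs.sum() - log_reward) ** 2

# Usage sketch: sample a multi-step reasoning trajectory from the LLM,
# score its terminal state with a task reward, then backpropagate.
log_z = torch.nn.Parameter(torch.zeros(()))          # trained jointly with the LLM
step_log_probs = torch.randn(4, requires_grad=True)  # stand-in for real step log-probs
log_reward = torch.tensor(1.5)                       # stand-in for log R(x)
loss = trajectory_balance_loss(step_log_probs, log_reward, log_z)
loss.backward()
```

Because the objective matches path probabilities to rewards rather than maximizing a single reward, it favors covering many high-reward reasoning paths, which is the diversity behavior the abstract describes.
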
Bibliographic Details

Published in: arXiv.org, 2024-10
Main authors: Yu, Fangxu; Lai, Jiang; Kang, Haoqiang; Hao, Shibo; Qin, Lianhui
Format: Article
Language: English
EISSN: 2331-8422
Publisher: Ithaca: Cornell University Library, arXiv.org
Source: Free E-Journals
Subjects: Cognition & reasoning; Data augmentation; Large language models; Machine learning; Problem solving; Reasoning
Online access: Full text