Stream of Search (SoS): Learning to Search in Language

Language models are rarely shown fruitful mistakes while training. They then struggle to look beyond the next token, suffering from a snowballing of errors and struggling to predict the consequence of their actions several steps ahead. In this paper, we show how language models can be taught to search by representing the process of search in language, as a flattened string -- a stream of search (SoS). We propose a unified language for search that captures an array of different symbolic search strategies. We demonstrate our approach using the simple yet difficult game of Countdown, where the goal is to combine input numbers with arithmetic operations to reach a target number. We pretrain a transformer-based language model from scratch on a dataset of streams of search generated by heuristic solvers. We find that SoS pretraining increases search accuracy by 25% over models trained to predict only the optimal search trajectory. We further finetune this model with two policy improvement methods: Advantage-Induced Policy Alignment (APA) and Self-Taught Reasoner (STaR). The finetuned SoS models solve 36% of previously unsolved problems, including problems that cannot be solved by any of the heuristic solvers. Our results indicate that language models can learn to solve problems via search, self-improve to flexibly use different search strategies, and potentially discover new ones.
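
To make the abstract's setup concrete, below is a minimal illustrative sketch (not code from the paper) of the Countdown task: a depth-first solver that combines the input numbers with arithmetic operations and records every node it visits as a flattened trace, in the spirit of the "stream of search" representation described above. The function name, trace format, operator handling, and search order are assumptions chosen for illustration; the paper's unified search language and heuristic solvers are not reproduced here.

# Illustrative sketch only: a tiny depth-first Countdown solver whose
# exploration log acts as a flattened "stream of search" string.
# All naming and formatting choices here are assumptions, not the paper's.

def countdown_sos(numbers, target):
    """Depth-first Countdown search; returns (solved, flattened search trace)."""
    trace = []  # one line per explored node -> the "stream of search"

    def dfs(state, history):
        trace.append(f"Explore: numbers={sorted(state)} target={target} ops={history}")
        if target in state:
            trace.append("Goal reached.")
            return True
        if len(state) == 1:
            trace.append("Dead end, backtrack.")
            return False
        # Combine every ordered pair of remaining numbers with +, -, *, /.
        for i in range(len(state)):
            for j in range(len(state)):
                if i == j:
                    continue
                a, b = state[i], state[j]
                rest = [state[k] for k in range(len(state)) if k not in (i, j)]
                candidates = [(a + b, f"{a}+{b}"), (a - b, f"{a}-{b}"), (a * b, f"{a}*{b}")]
                if b != 0 and a % b == 0:  # keep the sketch integer-only
                    candidates.append((a // b, f"{a}/{b}"))
                for value, op in candidates:
                    if dfs(rest + [value], history + [op]):
                        return True
        trace.append("Dead end, backtrack.")
        return False

    solved = dfs(list(numbers), [])
    return solved, "\n".join(trace)


if __name__ == "__main__":
    # Example: reach 26 from {3, 5, 7} (one solution: 3*7 = 21, then 21+5 = 26).
    ok, stream = countdown_sos([3, 5, 7], 26)
    print("solved:", ok)
    print(stream)

Running the example prints whether 26 is reachable from {3, 5, 7} together with the full exploration trace, including dead ends and backtracking steps; serialized traces of this kind (rather than only the optimal trajectory) are what the abstract proposes training language models on.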

Bibliographic details
Main authors: Gandhi, Kanishk; Lee, Denise; Grand, Gabriel; Liu, Muxin; Cheng, Winson; Sharma, Archit; Goodman, Noah D
Format: Article
Language: English
Subjects: Computer Science - Artificial Intelligence; Computer Science - Computation and Language; Computer Science - Learning
Published: 2024-04-01
DOI: 10.48550/arxiv.2404.03683
Source: arXiv.org
License: http://creativecommons.org/licenses/by/4.0
Online access: https://arxiv.org/abs/2404.03683