Guided Stream of Search: Learning to Better Search with Language Models via Optimal Path Guidance
Saved in:
Main authors:
Format: Article
Language: English
Subjects:
Online Access: Order full text
Abstract: While language models have demonstrated impressive capabilities across a
range of tasks, they still struggle with tasks that require complex planning
and reasoning. Recent studies have proposed training language models on search
processes rather than optimal solutions, resulting in better generalization
performance even though search processes are noisy and even suboptimal.
However, these studies overlook the value of optimal solutions, which can serve
as step-by-step landmarks to guide more effective search. In this work, we
explore how to leverage optimal solutions to enhance the search and planning
abilities of language models. To this end, we propose guided stream of search
(GSoS), which seamlessly incorporates optimal solutions into the
self-generation process in a progressive manner, producing high-quality search
trajectories. These trajectories are then distilled into the pre-trained model
via supervised fine-tuning. Our approach significantly enhances the search and
planning abilities of language models on Countdown, a simple yet challenging
mathematical reasoning task. Notably, combining our method with RL fine-tuning
yields further improvements, whereas previous supervised fine-tuning methods do
not benefit from RL. Furthermore, our approach exhibits greater effectiveness
than leveraging optimal solutions in the form of subgoal rewards.
DOI: 10.48550/arxiv.2410.02992
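Note: The abstract describes a pipeline in which optimal solutions guide the model's self-generated search, and the resulting trajectories are distilled via supervised fine-tuning. The sketch below is not the authors' code; it is a minimal, illustrative toy for the Countdown setting in which steps from a known optimal solution are mixed into an otherwise unguided (random) search with a tunable probability. The function names, the `guide_prob` parameter, and the fixed mixing schedule are assumptions made only for illustration; the paper's actual "progressive" incorporation is more involved.

```python
# Toy sketch (illustrative only): produce a "guided" Countdown search trajectory
# by occasionally injecting steps from a known optimal solution into a random
# search. The resulting text trajectory could serve as supervised fine-tuning data.
import random


def apply_op(a, b, op):
    """Apply a binary arithmetic operation; return None if the result is invalid."""
    if op == "+":
        return a + b
    if op == "-":
        return a - b if a - b > 0 else None
    if op == "*":
        return a * b
    if op == "/":
        return a // b if b != 0 and a % b == 0 else None
    return None


def step_available(numbers, a, b):
    """Check that both operands are still present in the current number pool."""
    if a == b:
        return numbers.count(a) >= 2
    return a in numbers and b in numbers


def random_step(numbers):
    """Unguided (self-generated) search step: combine two random numbers."""
    a, b = random.sample(numbers, 2)
    op = random.choice("+-*/")
    result = apply_op(a, b, op)
    return (a, b, op, result) if result is not None else None


def guided_trajectory(numbers, target, optimal_steps, guide_prob):
    """Roll out a search trajectory, injecting optimal-solution steps with
    probability `guide_prob` (a simplification of progressive guidance)."""
    numbers = list(numbers)
    lines = [f"Target: {target}, Numbers: {numbers}"]
    for depth in range(len(numbers) - 1):
        step = None
        if depth < len(optimal_steps) and random.random() < guide_prob:
            a, b, op, result = optimal_steps[depth]
            if step_available(numbers, a, b):
                step = (a, b, op, result)
        if step is None:
            step = random_step(numbers)
        if step is None:
            lines.append("dead end, backtrack")
            continue
        a, b, op, result = step
        numbers.remove(a)
        numbers.remove(b)
        numbers.append(result)
        lines.append(f"{a} {op} {b} = {result}")
        if result == target:
            lines.append("Goal reached.")
            break
    return "\n".join(lines)


if __name__ == "__main__":
    # Optimal solution for target 24 from [4, 9, 3]: 9 - 3 = 6, then 6 * 4 = 24.
    optimal = [(9, 3, "-", 6), (6, 4, "*", 24)]
    print(guided_trajectory([4, 9, 3], 24, optimal, guide_prob=0.5))
```

Printing several such rollouts at increasing values of `guide_prob` illustrates the intended effect: trajectories remain search-like (including dead ends) while increasingly passing through the landmarks of the optimal solution, which is the kind of data the abstract proposes distilling into the model.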