Enhancing Reasoning through Process Supervision with Monte Carlo Tree Search
| Main authors: | , , , , |
|---|---|
| Format: | Article |
| Language: | English |
| Subjects: | |
| Online access: | Order full text |
| Summary: | Large language models (LLMs) have demonstrated their remarkable capacity across a variety of tasks. However, reasoning remains a challenge for LLMs. To improve LLMs' reasoning ability, process supervision has proven to be better than outcome supervision. In this work, we study using Monte Carlo Tree Search (MCTS) to generate process supervision data with LLMs themselves for training them. We sample reasoning steps with an LLM and assign each step a score that captures its "relative correctness," and the LLM is then trained by minimizing the weighted log-likelihood of generating the reasoning steps. This generate-then-train process is repeated iteratively until convergence. Our experimental results demonstrate that the proposed methods considerably improve the performance of LLMs on two mathematical reasoning datasets. Furthermore, models trained on one dataset also exhibit improved performance on the other, showing the transferability of the enhanced reasoning ability. |
| DOI: | 10.48550/arxiv.2501.01478 |
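
The weighted log-likelihood objective described in the summary can be made concrete with a short sketch. The following PyTorch snippet is a minimal illustration, not the paper's implementation: the function `weighted_step_nll` and all tensor names are hypothetical, the per-step "relative correctness" scores are assumed to come from the paper's MCTS rollouts, and the objective is written as a weighted negative log-likelihood to minimize (equivalently, maximizing the weighted log-likelihood).

```python
import torch
import torch.nn.functional as F

def weighted_step_nll(logits, targets, step_ids, step_scores):
    """Weighted negative log-likelihood over reasoning-step tokens.

    logits:      (T, V) next-token logits from the LLM for T tokens
    targets:     (T,)   token ids of the sampled reasoning steps
    step_ids:    (T,)   index of the reasoning step each token belongs to
    step_scores: (S,)   per-step "relative correctness" score, assumed
                        here to be derived from MCTS value estimates
    """
    token_nll = F.cross_entropy(logits, targets, reduction="none")  # (T,)
    weights = step_scores[step_ids]                                 # (T,)
    # Down-weighting low-scoring steps means the gradient mostly
    # reinforces steps judged relatively correct.
    return (weights * token_nll).sum() / weights.sum().clamp_min(1e-8)

# Toy usage: 12 tokens, vocabulary of 50, 3 reasoning steps.
T, V, S = 12, 50, 3
logits = torch.randn(T, V, requires_grad=True)   # stand-in for model output
targets = torch.randint(0, V, (T,))
step_ids = torch.tensor([0] * 4 + [1] * 4 + [2] * 4)
step_scores = torch.tensor([0.9, 0.2, 0.6])      # hypothetical MCTS scores
loss = weighted_step_nll(logits, targets, step_ids, step_scores)
loss.backward()  # one gradient step inside the generate-then-train loop
```

How scores are produced by MCTS and whether weighting is applied per step or per token are details of the paper itself; the sketch only shows the general shape of such an objective within the iterative generate-then-train loop the summary describes.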