Enhancing Chain-of-Thoughts Prompting with Iterative Bootstrapping in Large Language Models
Format: | Article |
Language: | English |
Abstract: | Large language models (LLMs) can achieve highly effective performance on
various reasoning tasks by incorporating step-by-step chain-of-thought (CoT)
prompting as demonstrations. However, the reasoning chains of demonstrations
generated by LLMs are prone to errors, which can subsequently lead to incorrect
reasoning during inference. Furthermore, inappropriate exemplars (overly
simplistic or complex) can affect overall performance across varying levels of
difficulty. We introduce Iter-CoT (Iterative bootstrapping in Chain-of-Thoughts
Prompting), an iterative bootstrapping approach for selecting exemplars and
generating reasoning chains. By utilizing iterative bootstrapping, our approach
enables LLMs to autonomously rectify errors, resulting in more precise and
comprehensive reasoning chains. Simultaneously, our approach selects
challenging yet answerable questions accompanied by reasoning chains as
exemplars with a moderate level of difficulty, which enhances the LLMs'
generalizability across varying levels of difficulty. Experimental results
indicate that Iter-CoT exhibits superiority, achieving competitive performance
across three distinct reasoning tasks on ten datasets. |
DOI: | 10.48550/arxiv.2304.11657 |
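
The abstract describes an iterative bootstrapping loop: the LLM drafts a reasoning chain for a training question, is told when its answer is wrong, and revises the chain until it reaches the correct answer; questions solved only after revision are then kept as moderately difficult exemplars for inference. The sketch below is a minimal illustration of such a loop, not the paper's implementation; the function names (`call_llm`, `is_correct`, `bootstrap_exemplars`), the prompt wording, and the selection rule are assumptions for illustration only.

```python
"""Illustrative sketch of an Iter-CoT-style bootstrapping loop.
All prompts, function names, and thresholds are assumed for illustration;
the paper's actual procedure and hyperparameters may differ."""


def call_llm(prompt: str) -> str:
    # Placeholder for a call to an LLM completion API of your choice.
    raise NotImplementedError


def is_correct(reasoning: str, gold_answer: str) -> bool:
    # Naive check: does the gold answer appear on the final line of the chain?
    lines = reasoning.strip().splitlines()
    return bool(lines) and gold_answer.strip() in lines[-1]


def bootstrap_exemplars(train_set, max_rounds: int = 3):
    """Collect (question, reasoning chain) exemplars by letting the LLM
    revise its own chains until the final answer is correct."""
    exemplars = []
    for question, gold_answer in train_set:
        chain = call_llm(f"Q: {question}\nLet's think step by step.")
        rounds = 0
        while not is_correct(chain, gold_answer) and rounds < max_rounds:
            # Feed the erroneous chain back and ask the model to rectify it.
            chain = call_llm(
                f"Q: {question}\nYour previous reasoning was:\n{chain}\n"
                "The answer is incorrect. Please revise the reasoning "
                "step by step and give the correct answer."
            )
            rounds += 1
        # "Challenging yet answerable": kept only if at least one revision
        # was needed and the final chain is correct, approximating a
        # moderate level of difficulty.
        if rounds > 0 and is_correct(chain, gold_answer):
            exemplars.append((question, chain))
    return exemplars


def answer_with_exemplars(exemplars, test_question: str) -> str:
    """Use the bootstrapped chains as few-shot CoT demonstrations."""
    demos = "\n\n".join(f"Q: {q}\n{c}" for q, c in exemplars[:4])
    return call_llm(f"{demos}\n\nQ: {test_question}\nLet's think step by step.")
```

In this sketch, keeping only questions answered after at least one revision is one simple way to approximate the "challenging yet answerable" selection criterion the abstract mentions; the paper may use a different rule.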