Bi-Chainer: Automated Large Language Models Reasoning with Bidirectional Chaining
Main authors:
Format: Article
Language: eng
Subjects:
Online access: Order full text
Abstract: Large Language Models (LLMs) have shown human-like reasoning abilities but still face challenges in solving complex logical problems. Existing unidirectional chaining methods, such as forward chaining and backward chaining, suffer from issues like low prediction accuracy and efficiency. To address these issues, we propose a bidirectional chaining method, Bi-Chainer, which dynamically switches to depth-first reasoning in the opposite reasoning direction when it encounters multiple branching options within the current direction. Thus, the intermediate reasoning results can be utilized as guidance to facilitate the reasoning process. We show that Bi-Chainer achieves sizable accuracy boosts over unidirectional chaining frameworks on four challenging logical reasoning datasets. Moreover, Bi-Chainer enhances the accuracy of intermediate proof steps and reduces the average number of inference calls, resulting in more efficient and accurate reasoning.
DOI: 10.48550/arxiv.2406.06586
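
The abstract describes Bi-Chainer's control flow at a high level: chain in one direction until the number of applicable branching options grows too large, then continue depth-first from the opposite direction, and stop once the two frontiers meet. The sketch below illustrates only that control flow on a toy propositional rule base rather than with LLM calls; the rule representation, the BRANCH_LIMIT threshold, the greedy single-rule selection without backtracking, and all function names are assumptions made for illustration and are not taken from the paper.

```python
# Minimal sketch of a bidirectional-chaining control loop, as summarized in the
# abstract, on a toy propositional rule base instead of LLM inference calls.
# Rule format, BRANCH_LIMIT, and all names are illustrative assumptions.

from typing import List, Set, Tuple

Rule = Tuple[Set[str], str]  # (premises, conclusion)

BRANCH_LIMIT = 1  # assumed threshold for "multiple branching options";
                  # set low here so the demo actually switches directions


def forward_options(rules: List[Rule], facts: Set[str]) -> List[Rule]:
    """Rules whose premises are all known facts and whose conclusion is new."""
    return [(p, c) for p, c in rules if p <= facts and c not in facts]


def backward_options(rules: List[Rule], goals: Set[str]) -> List[Rule]:
    """Rules that could derive one of the currently open (sub)goals."""
    return [(p, c) for p, c in rules if c in goals]


def bi_chain(rules: List[Rule], facts: Set[str], goal: str, max_steps: int = 50) -> bool:
    """Alternate forward and backward steps, switching direction whenever the
    current direction branches too much, and stop once the frontiers meet."""
    facts, goals = set(facts), {goal}
    direction, just_switched = "forward", False
    for _ in range(max_steps):
        if goals & facts:  # frontiers meet: an open goal is already a known fact
            return True
        options = (forward_options(rules, facts) if direction == "forward"
                   else backward_options(rules, goals))
        if not options:
            return False  # dead end; a full system would backtrack here
        if len(options) > BRANCH_LIMIT and not just_switched:
            # Too many branches: continue depth-first from the other direction.
            direction = "backward" if direction == "forward" else "forward"
            just_switched = True
            continue
        just_switched = False
        premises, conclusion = options[0]  # greedily take the first applicable rule
        if direction == "forward":
            facts.add(conclusion)          # derive a new fact
        else:
            goals.remove(conclusion)       # replace the goal by its open premises
            goals |= premises - facts
            if not goals:
                return True                # every subgoal is already a known fact
    return False


if __name__ == "__main__":
    toy_rules: List[Rule] = [
        ({"a"}, "b"),
        ({"b"}, "c"),
        ({"a"}, "d"),
        ({"c", "d"}, "e"),
    ]
    print(bi_chain(toy_rules, facts={"a"}, goal="e"))  # expected: True
```

In the framework described by the abstract, the individual deduction steps are LLM inference calls rather than exact rule matching, which is why reducing the average number of calls matters for efficiency.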