Answering Questions by Meta-Reasoning over Multiple Chains of Thought
Saved in:

| Main authors: | , , , , , |
|---|---|
| Format: | Article |
| Language: | English |
| Subjects: | |
| Online access: | Order full text |
Abstract: Modern systems for multi-hop question answering (QA) typically break questions into a sequence of reasoning steps, termed a chain of thought (CoT), before arriving at a final answer. Often, multiple chains are sampled and aggregated through a voting mechanism over their final answers, but the intermediate steps themselves are discarded. While such approaches improve performance, they do not consider the relations between intermediate steps across chains and do not provide a unified explanation for the predicted answer. We introduce Multi-Chain Reasoning (MCR), an approach that prompts large language models to meta-reason over multiple chains of thought rather than aggregating their answers. MCR examines different reasoning chains, mixes information between them, and selects the most relevant facts when generating an explanation and predicting the answer. MCR outperforms strong baselines on 7 multi-hop QA datasets. Moreover, our analysis reveals that MCR explanations exhibit high quality, enabling humans to verify its answers.
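The abstract describes MCR only at a high level. As a rough illustration of the idea rather than the paper's actual prompts or implementation, the sketch below samples several chains of thought and then issues a single meta-reasoning prompt over all of them instead of voting on their final answers. The `call_llm` helper, the prompt wording, and the chain count are placeholders assumed for this sketch.

```python
from typing import List


def call_llm(prompt: str, temperature: float = 0.0) -> str:
    """Hypothetical stand-in for any text-completion API; replace as needed."""
    raise NotImplementedError


def sample_chains(question: str, n_chains: int = 5) -> List[str]:
    # Sample several diverse chains of thought for the same question.
    cot_prompt = f"Question: {question}\nLet's think step by step:"
    return [call_llm(cot_prompt, temperature=0.7) for _ in range(n_chains)]


def meta_reason(question: str, chains: List[str]) -> str:
    # Prompt the model once more to reason over all chains jointly,
    # combining relevant facts into one explanation plus a final answer.
    context = "\n\n".join(
        f"Reasoning chain {i}:\n{chain}" for i, chain in enumerate(chains, 1)
    )
    meta_prompt = (
        f"{context}\n\nQuestion: {question}\n"
        "Combine the most relevant facts from the chains above into a single "
        "explanation, then state the final answer."
    )
    return call_llm(meta_prompt, temperature=0.0)


def answer(question: str) -> str:
    return meta_reason(question, sample_chains(question))
```

The key design point this sketch tries to capture is that the intermediate steps of all chains are kept and passed to a final reasoning call, rather than being discarded in favor of a majority vote over answers.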
DOI: 10.48550/arxiv.2304.13007