Crystal: Introspective Reasoners Reinforced with Self-Feedback
Main authors:
Format: Article
Language: English
Subjects:
Online access: Order full text
Abstract:
Extensive work has shown that the performance and interpretability of
commonsense reasoning can be improved via knowledge-augmented reasoning
methods, where the knowledge that underpins the reasoning process is explicitly
verbalized and utilized. However, existing implementations, including
"chain-of-thought" and its variants, fall short in capturing the introspective
nature of knowledge required in commonsense reasoning, and in accounting for
the mutual adaptation between the generation and utilization of knowledge. We
propose a novel method to develop an introspective commonsense reasoner,
Crystal. To tackle commonsense problems, it first introspects for knowledge
statements related to the given question, and subsequently makes an informed
prediction that is grounded in the previously introspected knowledge. The
knowledge introspection and knowledge-grounded reasoning modes of the model are
tuned via reinforcement learning to mutually adapt, where the reward derives
from the feedback given by the model itself. Experiments show that Crystal
significantly outperforms both standard supervised finetuning and
chain-of-thought-distilled methods, and enhances the transparency of the
commonsense reasoning process. Our work ultimately validates the feasibility
and potential of reinforcing a neural model with self-feedback.
DOI: 10.48550/arxiv.2310.04921
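
To make the method described in the abstract concrete, below is a minimal sketch of the two-mode pipeline: a knowledge-introspection step followed by knowledge-grounded answering, with a reward computed from the model's own feedback. The `t5-base` checkpoint, the prompt formats, and the exact reward form are illustrative assumptions, not the authors' released implementation.

```python
import torch
from transformers import AutoTokenizer, T5ForConditionalGeneration

# Placeholder backbone; the paper's actual checkpoints and sizes may differ.
tok = AutoTokenizer.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("t5-base")

def introspect(question: str) -> str:
    """Mode 1 (knowledge introspection): generate a knowledge statement
    related to the question. The prompt format is an illustrative assumption."""
    ids = tok(f"generate knowledge: {question}", return_tensors="pt").input_ids
    out = model.generate(ids, do_sample=True, top_p=0.9, max_new_tokens=64)
    return tok.decode(out[0], skip_special_tokens=True)

def answer_logprob(question: str, knowledge: str, answer: str) -> float:
    """Mode 2 (knowledge-grounded reasoning): total log-probability the
    model assigns to `answer` given the question and the knowledge."""
    prompt = f"question: {question} knowledge: {knowledge}"
    ids = tok(prompt, return_tensors="pt").input_ids
    labels = tok(answer, return_tensors="pt").input_ids
    with torch.no_grad():
        mean_nll = model(input_ids=ids, labels=labels).loss  # mean NLL/token
    return -mean_nll.item() * labels.shape[1]

def self_feedback_reward(question: str, knowledge: str, gold: str) -> float:
    """Self-feedback reward: how much the introspected knowledge raises the
    model's own confidence in the gold answer versus using no knowledge."""
    return (answer_logprob(question, knowledge, gold)
            - answer_logprob(question, "", gold))

if __name__ == "__main__":
    q = "Would a pear sink in water?"
    k = introspect(q)
    print("knowledge:", k)
    print("reward:", self_feedback_reward(q, k, "no"))
```

In the abstract's framing, rewards of this kind would drive a policy-gradient update (e.g., PPO) of the introspection mode while the reasoning mode is tuned as well, so the two modes mutually adapt; the RL training loop itself is omitted from this sketch.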