Crystal: Introspective Reasoners Reinforced with Self-Feedback

Extensive work has shown that the performance and interpretability of commonsense reasoning can be improved via knowledge-augmented reasoning methods, where the knowledge that underpins the reasoning process is explicitly verbalized and utilized. However, existing implementations, including "chain-of-thought" and its variants, fall short in capturing the introspective nature of knowledge required in commonsense reasoning, and in accounting for the mutual adaptation between the generation and utilization of knowledge. We propose a novel method to develop an introspective commonsense reasoner, Crystal. To tackle commonsense problems, it first introspects for knowledge statements related to the given question, and subsequently makes an informed prediction that is grounded in the previously introspected knowledge. The knowledge introspection and knowledge-grounded reasoning modes of the model are tuned via reinforcement learning to mutually adapt, where the reward derives from the feedback given by the model itself. Experiments show that Crystal significantly outperforms both the standard supervised finetuning and chain-of-thought distilled methods, and enhances the transparency of the commonsense reasoning process. Our work ultimately validates the feasibility and potential of reinforcing a neural model with self-feedback.
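The two-stage loop the abstract describes (introspect a knowledge statement, then answer grounded in it, with a self-feedback reward) can be sketched as toy Python. This is not the authors' code: `introspect` and `answer_prob` are hypothetical stand-ins for the model's two modes, and the reward shown is only one plausible reading of "feedback given by the model itself" (the gain in gold-answer probability when knowledge is added).

```python
def introspect(question):
    # Stand-in knowledge generator; in the paper this is the model itself,
    # verbalizing a knowledge statement relevant to the question.
    return f"Background fact relevant to: {question}"

def answer_prob(question, choices, gold, knowledge=None):
    # Toy scoring function standing in for the reasoning mode's probability
    # of the gold answer, optionally grounded in an introspected statement.
    base = 1.0 / len(choices)
    bonus = 0.3 if knowledge else 0.0
    return min(1.0, base + bonus)

def self_feedback_reward(question, choices, gold):
    # Reward = p(gold | question, knowledge) - p(gold | question alone):
    # the model's own prediction scores the usefulness of its knowledge.
    k = introspect(question)
    with_k = answer_prob(question, choices, gold, knowledge=k)
    without_k = answer_prob(question, choices, gold)
    return with_k - without_k

r = self_feedback_reward("Why do we wear coats in winter?", ["warmth", "style"], "warmth")
```

In an actual RL setup this scalar would drive policy-gradient updates of the introspection mode, so that knowledge generation and knowledge use adapt to each other; the toy version only illustrates the reward's shape.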

Full Description

Bibliographic Details
Main Authors: Liu, Jiacheng; Pasunuru, Ramakanth; Hajishirzi, Hannaneh; Choi, Yejin; Celikyilmaz, Asli
Format: Article
Language: eng
Subjects:
Online Access: Order full text
creator Liu, Jiacheng
Pasunuru, Ramakanth
Hajishirzi, Hannaneh
Choi, Yejin
Celikyilmaz, Asli
description Extensive work has shown that the performance and interpretability of commonsense reasoning can be improved via knowledge-augmented reasoning methods, where the knowledge that underpins the reasoning process is explicitly verbalized and utilized. However, existing implementations, including "chain-of-thought" and its variants, fall short in capturing the introspective nature of knowledge required in commonsense reasoning, and in accounting for the mutual adaptation between the generation and utilization of knowledge. We propose a novel method to develop an introspective commonsense reasoner, Crystal. To tackle commonsense problems, it first introspects for knowledge statements related to the given question, and subsequently makes an informed prediction that is grounded in the previously introspected knowledge. The knowledge introspection and knowledge-grounded reasoning modes of the model are tuned via reinforcement learning to mutually adapt, where the reward derives from the feedback given by the model itself. Experiments show that Crystal significantly outperforms both the standard supervised finetuning and chain-of-thought distilled methods, and enhances the transparency of the commonsense reasoning process. Our work ultimately validates the feasibility and potential of reinforcing a neural model with self-feedback.
doi_str_mv 10.48550/arxiv.2310.04921
format Article
fulltext fulltext_linktorsrc
identifier DOI: 10.48550/arxiv.2310.04921
language eng
recordid cdi_arxiv_primary_2310_04921
source arXiv.org
subjects Computer Science - Artificial Intelligence
Computer Science - Computation and Language
Computer Science - Learning
title Crystal: Introspective Reasoners Reinforced with Self-Feedback
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2024-12-24T03%3A51%3A11IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-arxiv_GOX&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Crystal:%20Introspective%20Reasoners%20Reinforced%20with%20Self-Feedback&rft.au=Liu,%20Jiacheng&rft.date=2023-10-07&rft_id=info:doi/10.48550/arxiv.2310.04921&rft_dat=%3Carxiv_GOX%3E2310_04921%3C/arxiv_GOX%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true