Explicit Planning Helps Language Models in Logical Reasoning


Detailed Description

Bibliographic Details
Main Authors: Zhao, Hongyu; Wang, Kangrui; Yu, Mo; Mei, Hongyuan
Format: Article
Language: eng
Subjects:
creator Zhao, Hongyu
Wang, Kangrui
Yu, Mo
Mei, Hongyuan
description Language models have been shown to perform remarkably well on a wide range of natural language processing tasks. In this paper, we propose LEAP, a novel system that uses language models to perform multi-step logical reasoning and incorporates explicit planning into the inference procedure. Explicit planning enables the system to make more informed reasoning decisions at each step by looking ahead into their future effects. Moreover, we propose a training strategy that safeguards the planning process from being led astray by spurious features. Our full system significantly outperforms other competing methods on multiple standard datasets. When using small T5 models as its core selection and deduction components, our system performs competitively compared to GPT-3 despite having only about 1B parameters (i.e., 175 times smaller than GPT-3). When using GPT-3.5, it significantly outperforms chain-of-thought prompting on the challenging PrOntoQA dataset. We have conducted extensive empirical studies to demonstrate that explicit planning plays a crucial role in the system's performance.
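The description above is the core technical idea: at each reasoning step, instead of greedily choosing the locally most plausible deduction, the system looks ahead and scores candidate steps by how close their simulated consequences get to the goal. Purely as an illustration of that lookahead loop (not the authors' LEAP implementation, which uses T5 or GPT models for selection, deduction, and scoring), here is a minimal self-contained Python sketch; the toy rule table and all helper names are hypothetical.

```python
# A minimal sketch (not the authors' code) of the idea behind explicit planning:
# score each candidate deduction step not only by its immediate plausibility
# but by looking ahead at what it enables next. Rules and names are toy placeholders.

RULES = {
    ("A",): "B",         # from A we can deduce B
    ("B",): "C",         # from B we can deduce C
    ("A", "C"): "GOAL",  # A and C together yield the goal
}

def applicable_steps(facts):
    """Rules whose premises are all known and whose conclusion is new."""
    return [(prem, concl) for prem, concl in RULES.items()
            if set(prem) <= facts and concl not in facts]

def lookahead_score(facts, goal, depth):
    """Best goal proximity reachable within `depth` further deduction steps."""
    if goal in facts:
        return 1.0
    if depth == 0:
        return 0.0
    scores = [lookahead_score(facts | {concl}, goal, depth - 1)
              for _, concl in applicable_steps(facts)]
    return max(scores, default=0.0)

def plan_step(facts, goal, depth=2):
    """Pick the immediate step whose simulated future gets closest to the goal."""
    return max(applicable_steps(facts),
               key=lambda step: lookahead_score(facts | {step[1]}, goal, depth))

print(plan_step({"A"}, "GOAL"))  # -> (('A',), 'B'): the step that opens the path to GOAL
```

In the actual system the selection and deduction components are language models and the goal-proximity score is model-based, and the paper further describes a training strategy that keeps this planning signal from being misled by spurious features; the sketch only mirrors the control flow.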
doi_str_mv 10.48550/arxiv.2303.15714
format Article
fulltext fulltext_linktorsrc
identifier DOI: 10.48550/arxiv.2303.15714
language eng
recordid cdi_arxiv_primary_2303_15714
source arXiv.org
subjects Computer Science - Artificial Intelligence
Computer Science - Computation and Language
Computer Science - Learning
title Explicit Planning Helps Language Models in Logical Reasoning
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2024-12-28T20%3A59%3A33IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-arxiv_GOX&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Explicit%20Planning%20Helps%20Language%20Models%20in%20Logical%20Reasoning&rft.au=Zhao,%20Hongyu&rft.date=2023-03-27&rft_id=info:doi/10.48550/arxiv.2303.15714&rft_dat=%3Carxiv_GOX%3E2303_15714%3C/arxiv_GOX%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true