Premise Order Matters in Reasoning with Large Language Models

Large language models (LLMs) have accomplished remarkable reasoning performance in various domains. However, in the domain of reasoning tasks, we discover a frailty: LLMs are surprisingly brittle to the ordering of the premises, despite the fact that such ordering does not alter the underlying task. In particular, we observe that LLMs achieve the best performance when the premise order aligns with the context required in intermediate reasoning steps. For example, in deductive reasoning tasks, presenting the premises in the same order as the ground truth proof in the prompt (as opposed to random ordering) drastically increases the model's accuracy. We first examine the effect of premise ordering on deductive reasoning on a variety of LLMs, and our evaluation shows that permuting the premise order can cause a performance drop of over 30%. In addition, we release the benchmark R-GSM, based on GSM8K, to examine the ordering effect for mathematical problem-solving, and we again observe a significant drop in accuracy, relative to the original GSM8K benchmark.
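The manipulation the abstract describes can be illustrated with a minimal sketch: the same deductive problem is presented twice, once with premises in ground-truth proof order and once randomly permuted. The toy premises, the `build_prompt` helper, and the seed are hypothetical illustrations, not the paper's actual evaluation harness.

```python
import random

def build_prompt(premises, question):
    """Join premises (one per line) and append the question."""
    return "\n".join(premises) + "\n" + question

# Hypothetical toy problem: premises listed in ground-truth proof order.
premises = [
    "If Alice is at the park, then Bob is at home.",
    "If Bob is at home, then Carol is at work.",
    "Alice is at the park.",
]
question = "Is Carol at work?"

# Prompt with premises in proof order (the setting where LLMs do best).
ordered_prompt = build_prompt(premises, question)

# Prompt with the same premises randomly permuted: the underlying task is
# identical, only the surface ordering changes.
rng = random.Random(0)
shuffled = premises[:]
rng.shuffle(shuffled)
shuffled_prompt = build_prompt(shuffled, question)
```

Comparing model accuracy across many such ordered/permuted prompt pairs is the kind of experiment under which the paper reports drops of over 30%.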

Bibliographic Details
Main Authors: Chen, Xinyun; Chi, Ryan A; Wang, Xuezhi; Zhou, Denny
Format: Article
Language: English
DOI: 10.48550/arxiv.2402.08939
Published: 2024-02-13
Rights: http://creativecommons.org/licenses/by/4.0 (open access)
Source: arXiv.org
Subjects: Computer Science - Artificial Intelligence; Computer Science - Computation and Language