Scheherazade: Evaluating Chain-of-Thought Math Reasoning in LLMs with Chain-of-Problems
Benchmarks are critical for measuring progress of math reasoning abilities of Large Language Models (LLMs). However, existing widely-used benchmarks such as GSM8K have been rendered less useful as multiple cutting-edge LLMs achieve over 94% accuracy. While harder benchmarks have been proposed, their...
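The full abstract (reproduced in the description field below) proposes two chaining methods, forward and backward. As a minimal sketch of the chaining idea, assuming one plausible scheme in which a numeric quantity in a problem is replaced by a reference to another question's answer, the Python fragment below builds both presentation orders; the `Problem` class, function names, and prompt wording are invented for illustration and are not taken from the paper's code.

```python
# Hypothetical illustration of chaining GSM8K-style problems; not the authors'
# released implementation. A quantity in one problem is replaced by a reference
# to another question's answer, and the two functions differ only in the order
# in which the chained questions are presented.
from dataclasses import dataclass
from typing import List


@dataclass
class Problem:
    template: str   # statement with a "{qty}" placeholder for one numeric quantity
    quantity: str   # literal value of that quantity when the problem stands alone
    answer: float   # ground-truth answer when the literal value is used


def forward_chain(problems: List[Problem]) -> str:
    """Present problems so each one refers to the answer of the question before it."""
    lines = []
    for k, p in enumerate(problems, start=1):
        filler = p.quantity if k == 1 else f"(the answer to question {k - 1})"
        lines.append(f"Question {k}: {p.template.format(qty=filler)}")
    lines.append(f"Give the answer to question {len(problems)}.")
    return "\n".join(lines)


def backward_chain(problems: List[Problem]) -> str:
    """Present the same chain in reverse order, so each question refers to a later one."""
    n = len(problems)
    lines = []
    for k, p in enumerate(reversed(problems), start=1):
        filler = p.quantity if k == n else f"(the answer to question {k + 1})"
        lines.append(f"Question {k}: {p.template.format(qty=filler)}")
    lines.append("Give the answer to question 1.")
    return "\n".join(lines)


if __name__ == "__main__":
    # The second problem's standalone quantity (12) equals the first problem's
    # answer (3 bags x 4 apples), so replacing it with a reference keeps the
    # chained version solvable and its final answer unchanged.
    chain = [
        Problem("A bag holds {qty} apples. How many apples are in 3 bags?", "4", 12.0),
        Problem("A crate holds {qty} oranges. How many oranges fit in 2 crates?", "12", 24.0),
    ]
    print(forward_chain(chain))   # question 2 depends on question 1
    print()
    print(backward_chain(chain))  # question 1 depends on question 2
```

In the forward order each question depends only on earlier questions, so a solver can work through the prompt sequentially; in the backward order the first question refers to a quantity that is only pinned down by a later question, which is the case the abstract reports as harder for most models but handled better by o1-preview.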
Saved in:
Published in: | arXiv.org 2024-10 |
---|---|
Main authors: | Miner, Stephen; Takashima, Yoshiki; Han, Simeng; Erata, Ferhat; Antonopoulos, Timos; Piskac, Ruzica; Shapiro, Scott J |
Format: | Article |
Language: | eng |
Keywords: | Benchmarks; Chaining; Large language models; Performance evaluation; Questions; Reasoning |
Online access: | Full text |
container_end_page | |
---|---|
container_issue | |
container_start_page | |
container_title | arXiv.org |
container_volume | |
creator | Miner, Stephen; Takashima, Yoshiki; Han, Simeng; Erata, Ferhat; Antonopoulos, Timos; Piskac, Ruzica; Shapiro, Scott J |
description | Benchmarks are critical for measuring progress of math reasoning abilities of Large Language Models (LLMs). However, existing widely-used benchmarks such as GSM8K have been rendered less useful as multiple cutting-edge LLMs achieve over 94% accuracy. While harder benchmarks have been proposed, their creation is often manual and expensive. We present Scheherazade, an automated approach for producing challenging mathematical reasoning benchmarks by logically chaining mathematical reasoning problems. We propose two different chaining methods, forward chaining and backward chaining, which require reasoning forward and backward through the chain respectively. We apply Scheherazade on GSM8K to create GSM8K-Scheherazade and evaluate 3 frontier LLMs and OpenAI's o1-preview on it. We show that while frontier models' performance declines precipitously at only a few questions chained, a preliminary evaluation suggests o1-preview performance persists up to 5 questions chained backwards. In addition, while all other models perform worse when problems are chained backwards, o1-preview performs better on backward-chained benchmarks. We will release the dataset and code publicly. |
format | Article |
fulltext | fulltext |
identifier | EISSN: 2331-8422 |
ispartof | arXiv.org, 2024-10 |
issn | 2331-8422 |
language | eng |
recordid | cdi_proquest_journals_3113848338 |
source | Free E-Journals |
subjects | Benchmarks; Chaining; Large language models; Performance evaluation; Questions; Reasoning |
title | Scheherazade: Evaluating Chain-of-Thought Math Reasoning in LLMs with Chain-of-Problems |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-16T03%3A48%3A22IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest&rft_val_fmt=info:ofi/fmt:kev:mtx:book&rft.genre=document&rft.atitle=Scheherazade:%20Evaluating%20Chain-of-Thought%20Math%20Reasoning%20in%20LLMs%20with%20Chain-of-Problems&rft.jtitle=arXiv.org&rft.au=Miner,%20Stephen&rft.date=2024-10-11&rft.eissn=2331-8422&rft_id=info:doi/&rft_dat=%3Cproquest%3E3113848338%3C/proquest%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=3113848338&rft_id=info:pmid/&rfr_iscdi=true |