Fewer is More: Boosting LLM Reasoning with Reinforced Context Pruning

Large Language Models (LLMs) have shown impressive capabilities, yet they still struggle with math reasoning. In this work, we propose CoT-Influx, a novel approach that pushes the boundary of few-shot Chain-of-Thoughts (CoT) learning to improve LLM mathematical reasoning. Motivated by the observation that adding more concise CoT examples in the prompt can improve LLM reasoning performance, CoT-Influx employs a coarse-to-fine pruner to maximize the input of effective and concise CoT examples. The pruner first selects as many crucial CoT examples as possible and then prunes unimportant tokens to fit the context window. A math reasoning dataset with diverse difficulty levels and reasoning steps is used to train the pruner, along with a math-specialized reinforcement learning approach. As a result, by enabling more CoT examples with double the context window size in tokens, CoT-Influx significantly outperforms various prompting baselines across various LLMs (LLaMA2-7B, 13B, 70B) and 5 math datasets, achieving up to 4.55% absolute improvements. Remarkably, without any fine-tuning, LLaMA2-70B with CoT-Influx surpasses GPT-3.5 and a wide range of larger LLMs (PaLM, Minerva 540B, etc.) on the GSM8K. CoT-Influx serves as a plug-and-play module for LLMs and is compatible with most existing reasoning prompting techniques, such as self-consistency and self-verification.
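The abstract describes a coarse-to-fine procedure: first select as many useful CoT examples as fit a token budget, then prune low-importance tokens within each example. The Python sketch below illustrates only this general two-stage shape; the shot-selection and token-importance scores are placeholder heuristics, and the helper names (select_shots, prune_tokens, build_prompt) are hypothetical. The actual CoT-Influx pruner is learned with a math-specialized reinforcement-learning objective, which is not reproduced here.

```python
# Illustrative sketch of a coarse-to-fine CoT prompt pruner (not the paper's method).
# Both scoring functions below are simple placeholders; CoT-Influx learns both
# pruning stages with reinforcement learning on a math reasoning dataset.
from collections import Counter


def select_shots(examples, budget, score_fn):
    """Coarse stage: greedily keep the highest-scoring CoT examples
    whose combined length stays within the token budget."""
    ranked = sorted(examples, key=score_fn, reverse=True)
    kept, used = [], 0
    for ex in ranked:
        n_tokens = len(ex.split())
        if used + n_tokens <= budget:
            kept.append(ex)
            used += n_tokens
    return kept, used


def prune_tokens(example, keep_ratio, freq):
    """Fine stage: drop the least informative tokens of one example
    (placeholder: the most frequent words) until keep_ratio remains."""
    tokens = example.split()
    n_keep = max(1, int(len(tokens) * keep_ratio))
    # Rank positions by a placeholder importance score (rarer word = more important).
    order = sorted(range(len(tokens)), key=lambda i: freq[tokens[i].lower()])
    keep_idx = sorted(order[:n_keep])
    return " ".join(tokens[i] for i in keep_idx)


def build_prompt(examples, question, context_budget=2048, keep_ratio=0.8):
    """Coarse-to-fine pruning: select useful CoT shots, then trim their tokens
    so more shots fit into the same context window."""
    freq = Counter(w.lower() for ex in examples for w in ex.split())
    # Placeholder shot score: prefer shorter (more concise) CoT examples.
    shots, _ = select_shots(examples, context_budget, score_fn=lambda ex: -len(ex.split()))
    pruned = [prune_tokens(ex, keep_ratio, freq) for ex in shots]
    return "\n\n".join(pruned + [f"Question: {question}\nAnswer:"])


if __name__ == "__main__":
    demo_shots = [
        "Q: 2+3? Let's think step by step. 2 plus 3 equals 5. The answer is 5.",
        "Q: 4*6? Let's think step by step. 4 times 6 equals 24. The answer is 24.",
    ]
    print(build_prompt(demo_shots, "What is 7*8?", context_budget=64, keep_ratio=0.9))
```

In the paper, a learned policy would replace both placeholder scores, but the greedy budget accounting and per-example token trimming shown here capture the prompt-construction flow the abstract outlines.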

Detailed Description

Saved in:
Bibliographic Details
Main Authors: Huang, Xijie; Zhang, Li Lyna; Cheng, Kwang-Ting; Yang, Fan; Yang, Mao
Format: Article
Language: English
Subjects:
Online Access: Order full text
identifier DOI: 10.48550/arxiv.2312.08901
source arXiv.org
subjects Computer Science - Artificial Intelligence
Computer Science - Computation and Language