Dynamic Demonstrations Controller for In-Context Learning

In-context learning (ICL) is a new paradigm for natural language processing (NLP), where a large language model (LLM) observes a small number of demonstrations and a test instance as its input, and directly makes predictions without updating model parameters. Previous studies have revealed that ICL is sensitive to the selection and the ordering of demonstrations. However, there are few studies regarding the impact of the demonstration number on the ICL performance within a limited input length of LLM, because it is commonly believed that the number of demonstrations is positively correlated with model performance. In this paper, we found this conclusion does not always hold true. Through pilot experiments, we discover that increasing the number of demonstrations does not necessarily lead to improved performance. Building upon this insight, we propose a Dynamic Demonstrations Controller (D\(^2\)Controller), which can improve the ICL performance by adjusting the number of demonstrations dynamically. The experimental results show that D\(^2\)Controller yields a 4.6% relative improvement on ten different sizes of LLMs across ten datasets. Moreover, we also extend our method to previous ICL models and achieve competitive results.
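As an illustration of the ICL setup the abstract describes (not the paper's actual implementation), an input prompt can be built by concatenating k labeled demonstrations ahead of the unlabeled test instance; a controller like D\(^2\)Controller would choose k per task rather than fixing it. The function name, template, and example data below are hypothetical.

```python
# Sketch of the ICL input format: k demonstrations followed by the test
# instance, fed to a frozen LLM with no parameter updates. The template
# and names here are illustrative, not from the paper.

def build_icl_prompt(demonstrations, test_input, k):
    """Concatenate the first k demonstrations, then the unlabeled test input."""
    shots = demonstrations[:k]
    lines = [f"Review: {x}\nSentiment: {y}" for x, y in shots]
    # The test instance ends with an empty label slot for the model to fill.
    lines.append(f"Review: {test_input}\nSentiment:")
    return "\n\n".join(lines)

demos = [
    ("A moving, beautifully acted film.", "positive"),
    ("Dull plot and wooden dialogue.", "negative"),
    ("An instant classic.", "positive"),
]

# With k=2, only the first two demonstrations enter the prompt.
prompt = build_icl_prompt(demos, "Surprisingly funny and heartfelt.", k=2)
```

The paper's finding is that larger k is not always better, so k itself becomes a quantity worth selecting dynamically instead of a fixed hyperparameter.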

Detailed Description

Saved in:
Bibliographic Details
Published in: arXiv.org, 2024-12
Main authors: Zhao, Fei; Pang, Taotian; Wu, Zhen; Zheng, Ma; Huang, Shujian; Dai, Xinyu
Format: Article
Language: English
Identifier: EISSN 2331-8422
Subjects: Context; Controllers; Large language models; Natural language processing; Parameter sensitivity
Online access: Full text