Revisiting In-Context Learning with Long Context Language Models

In-Context Learning (ICL) is a technique by which language models make predictions based on examples provided in their input context. Previously, their context window size imposed a limit on the number of examples that can be shown, making example selection techniques crucial for identifying the maximally effective set of examples.

Detailed Description

Saved in:
Bibliographic Details
Main Authors: Baek, Jinheon; Lee, Sun Jae; Gupta, Prakhar; Oh, Geunseob; Dalmia, Siddharth; Kolhar, Prateek
Format: Article
Language: English
Subjects:
Online Access: Order full text
In-Context Learning (ICL) is a technique by which language models make predictions based on examples provided in their input context. Previously, their context window size imposed a limit on the number of examples that can be shown, making example selection techniques crucial for identifying the maximally effective set of examples. However, the recent advent of Long Context Language Models (LCLMs) has significantly increased the number of examples that can be included in context, raising an important question of whether ICL performance in a many-shot regime is still sensitive to the method of sample selection. To answer this, we revisit these approaches in the context of LCLMs through extensive experiments on 18 datasets spanning 4 tasks. Surprisingly, we observe that sophisticated example selection techniques do not yield significant improvements over a simple random sample selection method. Instead, we find that the advent of LCLMs has fundamentally shifted the challenge of ICL from that of selecting the most effective examples to that of collecting sufficient examples to fill the context window. Specifically, in certain datasets, including all available examples does not fully utilize the context window; however, by augmenting the examples in context with a simple data augmentation approach, we substantially improve ICL performance by 5%.
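The shift the abstract describes, from ranking the best examples to simply filling the available context with randomly chosen ones, can be sketched in a few lines. The sketch below is illustrative only and not taken from the paper: the function name, the `Input:`/`Label:` prompt format, and the whitespace word count standing in for a real tokenizer are all assumptions. It draws demonstrations in random order (the baseline the paper finds competitive) until an approximate token budget is exhausted, then appends the unlabeled query.

```python
import random

def build_many_shot_prompt(examples, query, budget_tokens, seed=0):
    """Fill a context budget with randomly ordered (input, label)
    demonstrations, then append the unlabeled query.

    No similarity-based ranking is used: ordering is a seeded shuffle,
    mirroring random sample selection for many-shot ICL.
    """
    rng = random.Random(seed)
    pool = list(examples)
    rng.shuffle(pool)  # random selection, not retrieval-based ranking

    shots, used = [], 0
    for text, label in pool:
        shot = f"Input: {text}\nLabel: {label}"
        cost = len(shot.split())  # crude stand-in for a real tokenizer
        if used + cost > budget_tokens:
            break  # context budget exhausted
        shots.append(shot)
        used += cost

    shots.append(f"Input: {query}\nLabel:")
    return "\n\n".join(shots)
```

With a long-context model the budget is large, so this loop typically runs out of examples before it runs out of budget, which is exactly the regime in which the abstract frames the problem as collecting (or augmenting) enough examples rather than selecting among them.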
DOI: 10.48550/arxiv.2412.16926
Published: 2024-12-22
Rights: http://arxiv.org/licenses/nonexclusive-distrib/1.0 (free to read)
Source: arXiv.org
Subjects:
Computer Science - Artificial Intelligence
Computer Science - Computation and Language
Computer Science - Learning
Full text: https://arxiv.org/abs/2412.16926