Just read twice: closing the recall gap for recurrent language models

Recurrent large language models that compete with Transformers in language modeling perplexity are emerging at a rapid rate (e.g., Mamba, RWKV). Excitingly, these architectures use a constant amount of memory during inference. However, due to the limited memory, recurrent LMs cannot recall and use all the information in long contexts, leading to brittle in-context learning (ICL) quality. A key challenge for efficient LMs is selecting what information to store versus discard. In this work, we observe that the order in which information is shown to the LM impacts the selection difficulty. To formalize this, we show that the hardness of information recall reduces to the hardness of a problem called set disjointness (SD), a quintessential problem in communication complexity that requires a streaming algorithm (e.g., a recurrent model) to decide whether inputted sets are disjoint. We show empirically and theoretically that the recurrent memory required to solve SD changes with set order, i.e., whether the smaller set appears first in-context. Our analysis suggests that, to mitigate the reliance on data order, we can either put information in the right order in-context or process prompts non-causally. To that end, we propose: (1) JRT-Prompt, where the context is repeated multiple times in the prompt, effectively showing the model all data orders. This gives 11.0 ± 1.3 points of improvement, averaged across 16 recurrent LMs and 6 ICL tasks, with 11.9× higher throughput than FlashAttention-2 for generation prefill (length 32k, batch size 16, NVIDIA H100). We then propose (2) JRT-RNN, which uses non-causal prefix linear attention to process prompts and provides 99% of Transformer quality at 360M parameters and 30B tokens, and 96% at 1.3B parameters and 50B tokens, averaged across the tasks, with 19.2× higher throughput for prefill than FlashAttention-2.
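
To make the order dependence concrete, the streaming intuition can be illustrated with a toy sketch (our own, not the paper's formal reduction): when the smaller set arrives first, memory proportional to that set suffices, whereas when the larger set arrives first there is no comparably cheap strategy.

```python
def disjoint_small_first(small_set, large_stream):
    """Streaming set-disjointness check when the SMALLER set arrives first:
    store it, then test each later element for membership. Memory is O(|small_set|)."""
    seen = set(small_set)                        # memory proportional to the smaller set
    return all(x not in seen for x in large_stream)

# Example: {2, 5} and {1, 3, 4, 6, 7} are disjoint.
print(disjoint_small_first({2, 5}, iter([1, 3, 4, 6, 7])))   # True
# If the larger set streamed past first, we could not discard it and still answer
# correctly in general, so the required memory grows with the larger set instead.
```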
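
The first proposal, JRT-Prompt, needs no architecture change: the context is repeated before the question, so every piece of information appears both before and after every other piece. A minimal sketch, where the helper name and prompt template are illustrative rather than taken from the paper:

```python
def jrt_prompt(context: str, question: str, n_repeats: int = 2) -> str:
    """Build a prompt that repeats the in-context documents n_repeats times before
    the question, effectively showing a recurrent model all data orders."""
    repeated = "\n\n".join([context] * n_repeats)
    return f"{repeated}\n\n{question}"

docs = "Doc 1: The meeting moved to Tuesday.\nDoc 2: The venue is Room 4."
print(jrt_prompt(docs, "Question: Which room is the meeting in? Answer:"))
```

The repeated prompt is longer, but the reported numbers indicate prefill throughput still exceeds FlashAttention-2 at the quoted lengths.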
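
JRT-RNN instead changes how the prompt is processed: prefix (prompt) tokens are encoded non-causally, so each one can read the entire prompt, while generation past the prefix stays causal and recurrent. The following single-head NumPy sketch shows the prefix-linear-attention idea under assumed choices; the feature map, normalization, and layout are ours and not the paper's exact JRT-RNN implementation.

```python
import numpy as np

def prefix_linear_attention(q, k, v, prefix_len, phi=lambda x: np.maximum(x, 0.0) + 1e-6):
    """Linear attention with a non-causal prefix: the K^T V state over the prompt is
    built from ALL prefix tokens, then decoding past the prefix proceeds causally."""
    T, _ = q.shape
    q, k = phi(q), phi(k)                     # positive feature map (assumed choice)
    out = np.zeros_like(v)

    # Non-causal prefix: every prompt token reads the whole prompt.
    S = k[:prefix_len].T @ v[:prefix_len]     # (d, d_v) state
    z = k[:prefix_len].sum(axis=0)            # (d,) normalizer
    for t in range(prefix_len):
        out[t] = (q[t] @ S) / (q[t] @ z)

    # Causal decoding: the state grows one token at a time after the prefix.
    for t in range(prefix_len, T):
        S += np.outer(k[t], v[t])
        z += k[t]
        out[t] = (q[t] @ S) / (q[t] @ z)
    return out

# Example: an 8-token sequence whose first 5 tokens form the prompt.
rng = np.random.default_rng(0)
q, k, v = (rng.standard_normal((8, 4)) for _ in range(3))
print(prefix_linear_attention(q, k, v, prefix_len=5).shape)   # (8, 4)
```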

Bibliographic details
Published in: arXiv.org, 2024-07
Authors: Arora, Simran; Timalsina, Aman; Singhal, Aaryan; Spector, Benjamin; Eyuboglu, Sabri; Zhao, Xinyi; Rao, Ashish; Rudra, Atri; Ré, Christopher
Format: Article
Language: English
Publisher: Cornell University Library, arXiv.org (Ithaca)
EISSN: 2331-8422
Subjects: Algorithms; Context; Hardness; Large language models; Recall; Transformers
Online access: Full text