Birdie: Advancing State Space Models with Reward-Driven Objectives and Curricula

Efficient state space models (SSMs), such as linear recurrent neural networks and linear attention variants, offer computational advantages over Transformers but struggle with tasks requiring long-range in-context retrieval, such as text copying, associative recall, and question answering over long contexts. Previous efforts to address these challenges have focused on architectural modifications, often reintroducing computational inefficiencies. In this paper, we propose a novel training procedure, Birdie, that significantly enhances the in-context retrieval capabilities of SSMs without altering their architecture. Our approach combines bidirectional input processing with dynamic mixtures of specialized pre-training objectives, optimized via reinforcement learning. We also introduce a new bidirectional SSM architecture that seamlessly transitions from bidirectional context processing to causal generation. Experimental evaluations demonstrate that Birdie markedly improves performance on retrieval-intensive tasks such as multi-number phone book lookup, long-paragraph question answering, and infilling, narrowing the performance gap with Transformers while retaining computational efficiency. Our findings highlight the importance of training procedures in leveraging the fixed-state capacity of SSMs, offering a new direction for advancing their capabilities. All code and pre-trained models are available at https://www.github.com/samblouir/birdie, with support for JAX and PyTorch.
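
The description above mentions a dynamic mixture of specialized pre-training objectives whose sampling proportions are optimized via reinforcement learning. The following is a minimal sketch of that idea, assuming an EXP3-style multi-armed bandit over objectives; the objective names and the scalar reward signal here are hypothetical stand-ins for illustration, not Birdie's actual implementation (see the linked repository for that).

```python
import math
import random

# Hypothetical objective names; Birdie's real objective set may differ.
OBJECTIVES = ["next_token", "infilling", "copying", "deshuffling"]

class ObjectiveSampler:
    """EXP3-style bandit that reweights pre-training objectives by reward."""

    def __init__(self, objectives, lr=0.1):
        self.objectives = objectives
        self.lr = lr
        self.log_weights = {o: 0.0 for o in objectives}

    def probs(self):
        # Softmax over log-weights, shifted by the max for stability.
        m = max(self.log_weights.values())
        exps = {o: math.exp(w - m) for o, w in self.log_weights.items()}
        z = sum(exps.values())
        return {o: e / z for o, e in exps.items()}

    def sample(self):
        # Draw one objective for the next training batch.
        p = self.probs()
        r, acc = random.random(), 0.0
        for o in self.objectives:
            acc += p[o]
            if r <= acc:
                return o
        return self.objectives[-1]

    def update(self, objective, reward):
        # Importance-weighted update: objectives whose batches yield higher
        # reward (e.g. larger loss improvement) receive more sampling mass.
        p = self.probs()[objective]
        self.log_weights[objective] += self.lr * reward / p

sampler = ObjectiveSampler(OBJECTIVES)
obj = sampler.sample()            # pick an objective for the next batch
sampler.update(obj, reward=1.0)   # feed back a scalar training signal
```

In this sketch the reward would come from the training loop (for instance, how much a batch of that objective reduced validation loss), so objectives that help retrieval get sampled more often as training proceeds.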

Bibliographic Details
Main Authors: Blouir, Sam; Smith, Jimmy T. H.; Anastasopoulos, Antonios; Shehu, Amarda
Format: Article
Language: English
Subjects: Computer Science - Artificial Intelligence; Computer Science - Computation and Language; Computer Science - Learning
Online Access: Order full text
creator Blouir, Sam; Smith, Jimmy T. H.; Anastasopoulos, Antonios; Shehu, Amarda
doi_str_mv 10.48550/arxiv.2411.01030
format Article
creationdate 2024-11
rights http://arxiv.org/licenses/nonexclusive-distrib/1.0
identifier DOI: 10.48550/arxiv.2411.01030
language eng
source arXiv.org
subjects Computer Science - Artificial Intelligence
Computer Science - Computation and Language
Computer Science - Learning
title Birdie: Advancing State Space Models with Reward-Driven Objectives and Curricula