SEA: Sparse Linear Attention with Estimated Attention Mask

The transformer architecture has driven breakthroughs in recent years on tasks which require modeling pairwise relationships between sequential elements, as is the case in natural language understanding. However, long sequences pose a problem due to the quadratic complexity of the attention operation. Previous research has aimed to lower the complexity by sparsifying or linearly approximating the attention matrix. Yet, these approaches cannot straightforwardly distill knowledge from a teacher's attention matrix and often require complete retraining from scratch. Furthermore, previous sparse and linear approaches lose interpretability if they cannot produce full attention matrices. To address these challenges, we propose SEA: Sparse linear attention with an Estimated Attention mask. SEA estimates the attention matrix with linear complexity via kernel-based linear attention, and then creates a sparse attention matrix with a top-k selection to perform a sparse attention operation. For language modeling tasks (Wikitext2), previous linear and sparse attention methods show roughly two-fold worse perplexity than the quadratic OPT-1.3B baseline, while SEA achieves better perplexity than OPT-1.3B, using roughly half the memory of OPT-1.3B, while providing an interpretable attention matrix. We believe that our work will have a large practical impact, as it opens the possibility of running large transformers on resource-limited devices with less memory.
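
The abstract describes SEA as a two-step procedure: estimate the attention pattern cheaply with a kernel feature map, then keep only the top-k estimated entries per query as a sparse mask and attend over those keys. Below is a minimal, illustrative sketch of that idea in PyTorch; the elu(x)+1 feature map, the value of k, the function names, and the densely materialized intermediate tensors are assumptions made for brevity, not the authors' implementation (which is designed to avoid the quadratic memory cost this toy version still incurs).

    import torch
    import torch.nn.functional as F


    def feature_map(x: torch.Tensor) -> torch.Tensor:
        # Positive kernel feature map commonly used in linear attention (assumption: elu(x) + 1).
        return F.elu(x) + 1.0


    def sea_style_attention(q, k, v, top_k=16):
        # q, k, v: [batch, seq_len, dim]; returns (output, boolean sparse mask).
        # Step 1: cheap *estimate* of the attention pattern via the kernel feature map.
        # (Materialized densely here for clarity only; SEA's point is linear complexity.)
        qf, kf = feature_map(q), feature_map(k)
        est_scores = qf @ kf.transpose(-2, -1)                  # [B, T, T] estimated attention

        # Step 2: top-k selection per query -> sparse attention mask.
        kk = min(top_k, k.shape[-2])
        idx = est_scores.topk(kk, dim=-1).indices               # strongest keys per query
        mask = torch.zeros_like(est_scores).scatter_(-1, idx, 1.0).bool()

        # Step 3: sparse (masked) softmax attention with the true query/key scores,
        # yielding an interpretable sparse attention matrix.
        scores = (q @ k.transpose(-2, -1)) / (q.shape[-1] ** 0.5)
        attn = scores.masked_fill(~mask, float("-inf")).softmax(dim=-1)
        return attn @ v, mask


    if __name__ == "__main__":
        B, T, D = 2, 128, 64
        q, k, v = torch.randn(B, T, D), torch.randn(B, T, D), torch.randn(B, T, D)
        out, mask = sea_style_attention(q, k, v, top_k=16)
        print(out.shape, mask.float().mean().item())            # torch.Size([2, 128, 64]) 0.125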

Bibliographic Details
Main authors: Lee, Heejun; Kim, Jina; Willette, Jeffrey; Hwang, Sung Ju
Format: Article
Language: English
Subjects: Computer Science - Computation and Language; Computer Science - Learning
Online access: Order full text
DOI: 10.48550/arxiv.2310.01777
Date: 2023-10-02
Rights: http://creativecommons.org/licenses/by-nc-nd/4.0
Source: arXiv.org
URL: https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-20T10%3A00%3A58IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-arxiv_GOX&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=SEA:%20Sparse%20Linear%20Attention%20with%20Estimated%20Attention%20Mask&rft.au=Lee,%20Heejun&rft.date=2023-10-02&rft_id=info:doi/10.48550/arxiv.2310.01777&rft_dat=%3Carxiv_GOX%3E2310_01777%3C/arxiv_GOX%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true