Efficient Attention using a Fixed-Size Memory Representation

The standard content-based attention mechanism typically used in sequence-to-sequence models is computationally expensive as it requires the comparison of large encoder and decoder states at each time step. In this work, we propose an alternative attention mechanism based on a fixed size memory representation that is more efficient. Our technique predicts a compact set of K attention contexts during encoding and lets the decoder compute an efficient lookup that does not need to consult the memory. We show that our approach performs on-par with the standard attention mechanism while yielding inference speedups of 20% for real-world translation tasks and more for tasks with longer sequences. By visualizing attention scores we demonstrate that our models learn distinct, meaningful alignments.
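The abstract describes the mechanism only at a high level. Below is a minimal NumPy sketch of the general idea, not the authors' exact formulation: an encoder-side scoring matrix (here called `W_score`, a hypothetical name) compresses the T encoder states into K fixed context vectors once per source sentence, and at each decoding step a small decoder-side lookup (`W_lookup`, also hypothetical) scores only those K vectors instead of comparing against all T encoder states.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def encode_memory(encoder_states, W_score):
    """Compress T encoder states (T x d) into K fixed context vectors (K x d).

    Each column of the score matrix is a softmax over source positions, so each
    of the K contexts is a separate weighted average of the encoder states.
    """
    scores = softmax(encoder_states @ W_score.T, axis=0)  # (T, K)
    return scores.T @ encoder_states                      # (K, d)

def decode_lookup(decoder_state, contexts, W_lookup):
    """Cheap per-step lookup: score only the K contexts, never the T encoder states."""
    weights = softmax(W_lookup @ decoder_state)           # (K,)
    return weights @ contexts                             # (d,)

# Toy example: T source positions, hidden size d, K memory slots (all illustrative).
T, d, K = 50, 8, 4
rng = np.random.default_rng(0)
encoder_states = rng.normal(size=(T, d))
W_score = rng.normal(size=(K, d))    # hypothetical encoder-side scoring weights
W_lookup = rng.normal(size=(K, d))   # hypothetical decoder-side lookup weights

contexts = encode_memory(encoder_states, W_score)         # computed once per sentence
context_t = decode_lookup(rng.normal(size=d), contexts, W_lookup)
print(contexts.shape, context_t.shape)                    # (4, 8) (8,)
```

Under this sketch the per-step attention cost is O(K·d) rather than O(T·d), which is consistent with the abstract's claim that the speedup grows for tasks with longer sequences.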

Detailed Description

Bibliographic Details
Main authors: Britz, Denny; Guan, Melody Y; Luong, Minh-Thang
Format: Article
Language: English
Subjects: Computer Science - Computation and Language
Online access: https://arxiv.org/abs/1707.00110
creator Britz, Denny ; Guan, Melody Y ; Luong, Minh-Thang
description The standard content-based attention mechanism typically used in sequence-to-sequence models is computationally expensive as it requires the comparison of large encoder and decoder states at each time step. In this work, we propose an alternative attention mechanism based on a fixed size memory representation that is more efficient. Our technique predicts a compact set of K attention contexts during encoding and lets the decoder compute an efficient lookup that does not need to consult the memory. We show that our approach performs on-par with the standard attention mechanism while yielding inference speedups of 20% for real-world translation tasks and more for tasks with longer sequences. By visualizing attention scores we demonstrate that our models learn distinct, meaningful alignments.
doi_str_mv 10.48550/arxiv.1707.00110
format Article
identifier DOI: 10.48550/arxiv.1707.00110
language eng
recordid cdi_arxiv_primary_1707_00110
source arXiv.org
subjects Computer Science - Computation and Language
title Efficient Attention using a Fixed-Size Memory Representation
url https://arxiv.org/abs/1707.00110