Learning a Fourier Transform for Linear Relative Positional Encodings in Transformers
We propose a new class of linear Transformers called FourierLearner-Transformers (FLTs), which incorporate a wide range of relative positional encoding mechanisms (RPEs). These include regular RPE techniques applied for sequential data, as well as novel RPEs operating on geometric data embedded in higher-dimensional Euclidean spaces. FLTs construct the optimal RPE mechanism implicitly by learning its spectral representation. As opposed to other architectures combining efficient low-rank linear attention with RPEs, FLTs remain practical in terms of their memory usage and do not require additional assumptions about the structure of the RPE mask. Besides, FLTs allow for applying certain structural inductive bias techniques to specify masking strategies, e.g. they provide a way to learn the so-called local RPEs introduced in this paper and give accuracy gains as compared with several other linear Transformers for language modeling. We also thoroughly test FLTs on other data modalities and tasks, such as image classification, 3D molecular modeling, and learnable optimizers. To the best of our knowledge, for 3D molecular data, FLTs are the first Transformer architectures providing linear attention and incorporating RPE masking.
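The abstract's key idea, learning the RPE mask implicitly through its spectral (Fourier) representation rather than storing the mask itself, can be sketched roughly as follows. This is an illustrative assumption-laden sketch, not the paper's implementation: the variable names, the number of frequencies `M`, and the plain cosine parameterization are all hypothetical.

```python
import numpy as np

# Hypothetical sketch of a spectrally parameterized RPE: a function
# f(delta) over relative positions is represented by M learned
# frequencies and weights (both would be trainable in a real model).
rng = np.random.default_rng(0)

M = 8                                # number of learned frequencies (assumed)
freqs = rng.normal(size=M)           # spectral positions (trainable)
weights = rng.normal(size=M)         # spectral weights (trainable)

def rpe(delta):
    """RPE value for relative position delta, as a learned Fourier sum."""
    return float(np.sum(weights * np.cos(freqs * delta)))

# The full L x L RPE mask is f(i - j) over all token pairs; because it
# depends only on i - j, it is Toeplitz (constant along diagonals), so
# it never needs to be materialized explicitly.
L = 4
mask = np.array([[rpe(i - j) for j in range(L)] for i in range(L)])
```

The point of the spectral view is that the model stores only `2M` parameters instead of an explicit mask, which is what keeps memory usage practical when combined with low-rank linear attention.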
Published in: | arXiv.org 2024-04 |
---|---|
Format: | Article |
Language: | eng |
Online access: | Full text |
creator | Krzysztof Marcin Choromanski; Li, Shanda; Likhosherstov, Valerii; Dubey, Kumar Avinava; Luo, Shengjie; He, Di; Yang, Yiming; Sarlos, Tamas; Weingarten, Thomas; Weller, Adrian |
description | We propose a new class of linear Transformers called FourierLearner-Transformers (FLTs), which incorporate a wide range of relative positional encoding mechanisms (RPEs). These include regular RPE techniques applied for sequential data, as well as novel RPEs operating on geometric data embedded in higher-dimensional Euclidean spaces. FLTs construct the optimal RPE mechanism implicitly by learning its spectral representation. As opposed to other architectures combining efficient low-rank linear attention with RPEs, FLTs remain practical in terms of their memory usage and do not require additional assumptions about the structure of the RPE mask. Besides, FLTs allow for applying certain structural inductive bias techniques to specify masking strategies, e.g. they provide a way to learn the so-called local RPEs introduced in this paper and give accuracy gains as compared with several other linear Transformers for language modeling. We also thoroughly test FLTs on other data modalities and tasks, such as image classification, 3D molecular modeling, and learnable optimizers. To the best of our knowledge, for 3D molecular data, FLTs are the first Transformer architectures providing linear attention and incorporating RPE masking. |
format | Article |
identifier | EISSN: 2331-8422 |
ispartof | arXiv.org, 2024-04 |
issn | 2331-8422 |
language | eng |
recordid | cdi_proquest_journals_2773472020 |
source | Free E-Journals |
subjects | Fourier transforms; Image classification; Learning; Modelling; Three dimensional models |
title | Learning a Fourier Transform for Linear Relative Positional Encodings in Transformers |