GLU Variants Improve Transformer

Gated Linear Units (arXiv:1612.08083) consist of the component-wise product of two linear projections, one of which is first passed through a sigmoid function. Variations on GLU are possible, using different nonlinear (or even linear) functions in place of sigmoid. We test these variants in the feed-forward sublayers of the Transformer (arXiv:1706.03762) sequence-to-sequence model, and find that some of them yield quality improvements over the typically-used ReLU or GELU activations.
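
The abstract describes the construction in prose; the sketch below is a minimal PyTorch rendering of a Transformer feed-forward sublayer with the gated variants the paper compares (GLU, GEGLU, SwiGLU, and a bilinear layer). The class name, the variant argument, and the dimensions are illustrative assumptions, not code from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class GLUFeedForward(nn.Module):
    """Transformer feed-forward sublayer with a GLU-style gate.

    FFN_GLU(x)      = (sigmoid(xW) * xV) W2
    FFN_GEGLU(x)    = (GELU(xW)    * xV) W2
    FFN_SwiGLU(x)   = (Swish(xW)   * xV) W2
    FFN_Bilinear(x) = (xW          * xV) W2   # no nonlinearity
    """

    def __init__(self, d_model: int, d_ff: int, variant: str = "swiglu"):
        super().__init__()
        # Two parallel input projections: one passes through the gating
        # (non)linearity, the other stays linear; their component-wise
        # product is then projected back to d_model.
        self.w = nn.Linear(d_model, d_ff, bias=False)
        self.v = nn.Linear(d_model, d_ff, bias=False)
        self.w2 = nn.Linear(d_ff, d_model, bias=False)
        self.gate = {
            "glu": torch.sigmoid,
            "geglu": F.gelu,
            "swiglu": F.silu,         # SiLU == Swish with beta = 1
            "bilinear": lambda t: t,  # linear "activation"
        }[variant]

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.w2(self.gate(self.w(x)) * self.v(x))


if __name__ == "__main__":
    # The paper keeps parameter count roughly constant by shrinking d_ff to
    # about 2/3 of its usual value, since GLU variants use three weight
    # matrices instead of two.
    ffn = GLUFeedForward(d_model=512, d_ff=1365, variant="swiglu")
    out = ffn(torch.randn(2, 16, 512))
    print(out.shape)  # torch.Size([2, 16, 512])
```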

Bibliographic details
Main author: Shazeer, Noam
Format: Article
Language: English
Subjects: Computer Science - Learning; Computer Science - Neural and Evolutionary Computing; Statistics - Machine Learning
DOI: 10.48550/arxiv.2002.05202
Published: 2020-02-12
Source: arXiv.org
Online access: https://arxiv.org/abs/2002.05202