Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context
Saved in:
Main Authors: | Dai, Zihang; Yang, Zhilin; Yang, Yiming; Carbonell, Jaime; Le, Quoc V; Salakhutdinov, Ruslan |
---|---|
Format: | Article |
Language: | English |
Subjects: | Computer Science - Computation and Language; Computer Science - Learning; Statistics - Machine Learning |
Online Access: | Order full text |
creator | Dai, Zihang; Yang, Zhilin; Yang, Yiming; Carbonell, Jaime; Le, Quoc V; Salakhutdinov, Ruslan |
description | Transformers have the potential to learn longer-term dependencies, but are limited by a fixed-length context in the setting of language modeling. We propose a novel neural architecture, Transformer-XL, that enables learning dependencies beyond a fixed length without disrupting temporal coherence. It consists of a segment-level recurrence mechanism and a novel positional encoding scheme. Our method not only enables capturing longer-term dependencies, but also resolves the context fragmentation problem. As a result, Transformer-XL learns dependencies that are 80% longer than RNNs and 450% longer than vanilla Transformers, achieves better performance on both short and long sequences, and is up to 1,800+ times faster than vanilla Transformers during evaluation. Notably, we improve the state-of-the-art results in bpc/perplexity to 0.99 on enwik8, 1.08 on text8, 18.3 on WikiText-103, 21.8 on One Billion Word, and 54.5 on Penn Treebank (without finetuning). When trained only on WikiText-103, Transformer-XL manages to generate reasonably coherent, novel text articles with thousands of tokens. Our code, pretrained models, and hyperparameters are available in both TensorFlow and PyTorch. |
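
The core idea behind the segment-level recurrence described in the abstract is to cache the hidden states computed for the previous segment and let the current segment attend over them, with gradients stopped at the cache. The sketch below is a minimal single-head illustration of that caching pattern, not the authors' released implementation: it omits the paper's relative positional encoding and multi-layer stacking (where each layer caches the previous layer's outputs), and all names here (attend_with_memory, W_q, and so on) are made up for the example.

```python
# Minimal sketch of segment-level recurrence (illustrative only, not the
# Transformer-XL reference code). The previous segment's hidden states are
# cached as "memory"; attention for the current segment spans both.
import torch

def attend_with_memory(h, mem, W_q, W_k, W_v):
    """Single-head attention over [memory; current segment].

    h:   (seg_len, d_model)  hidden states of the current segment
    mem: (mem_len, d_model)  cached states from the previous segment
    """
    context = torch.cat([mem.detach(), h], dim=0)  # gradients stop at the memory
    q = h @ W_q                                    # queries come from the current segment only
    k = context @ W_k                              # keys/values span memory + current segment
    v = context @ W_v
    attn = torch.softmax(q @ k.T / k.shape[-1] ** 0.5, dim=-1)
    return attn @ v

# Toy driver: walk a long sequence segment by segment, carrying memory forward.
d_model, seg_len, mem_len = 16, 4, 4
W_q, W_k, W_v = (torch.randn(d_model, d_model) * 0.02 for _ in range(3))
mem = torch.zeros(mem_len, d_model)
for segment in torch.randn(3, seg_len, d_model):   # three consecutive segments
    out = attend_with_memory(segment, mem, W_q, W_k, W_v)
    # Update the cache with the newest states, keeping only the last mem_len rows.
    mem = torch.cat([mem, segment], dim=0)[-mem_len:]
```

Because the cache lets each new segment reuse representations from its predecessors instead of recomputing them from scratch for every position, this same mechanism is the source of the large evaluation-time speedup reported in the abstract.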
doi_str_mv | 10.48550/arxiv.1901.02860 |
format | Article |
identifier | DOI: 10.48550/arxiv.1901.02860 |
language | eng |
recordid | cdi_arxiv_primary_1901_02860 |
source | arXiv.org |
subjects | Computer Science - Computation and Language; Computer Science - Learning; Statistics - Machine Learning |
title | Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2024-12-22T10%3A58%3A18IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-arxiv_GOX&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Transformer-XL:%20Attentive%20Language%20Models%20Beyond%20a%20Fixed-Length%20Context&rft.au=Dai,%20Zihang&rft.date=2019-01-09&rft_id=info:doi/10.48550/arxiv.1901.02860&rft_dat=%3Carxiv_GOX%3E1901_02860%3C/arxiv_GOX%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true |