TimeXer: Empowering Transformers for Time Series Forecasting with Exogenous Variables
Deep models have demonstrated remarkable performance in time series forecasting. However, due to the partially-observed nature of real-world applications, solely focusing on the target of interest, so-called endogenous variables, is usually insufficient to guarantee accurate forecasting. Notably, a system is often recorded into multiple variables, where the exogenous variables can provide valuable external information for endogenous variables. Thus, unlike well-established multivariate or univariate forecasting paradigms that either treat all the variables equally or ignore exogenous information, this paper focuses on a more practical setting: time series forecasting with exogenous variables. We propose a novel approach, TimeXer, to ingest external information to enhance the forecasting of endogenous variables. With deftly designed embedding layers, TimeXer empowers the canonical Transformer with the ability to reconcile endogenous and exogenous information, where patch-wise self-attention and variate-wise cross-attention are used simultaneously. Moreover, global endogenous tokens are learned to effectively bridge the causal information underlying exogenous series into endogenous temporal patches. Experimentally, TimeXer achieves consistent state-of-the-art performance on twelve real-world forecasting benchmarks and exhibits notable generality and scalability. Code is available at this repository: https://github.com/thuml/TimeXer.
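For orientation, below is a minimal, self-contained PyTorch sketch of the mechanism the abstract describes, not the authors' implementation (which lives at https://github.com/thuml/TimeXer). All dimensions, layer choices, and names (`TimeXerBlockSketch`, `patch_embed`, `variate_embed`, `global_token`) are illustrative assumptions.

```python
import torch
import torch.nn as nn


class TimeXerBlockSketch(nn.Module):
    """Illustrative sketch of the attention scheme described in the abstract.

    One token per endogenous patch plus a learnable global endogenous token
    go through self-attention; the global token then cross-attends to
    variate-wise embeddings of the exogenous series, bridging external
    information into the endogenous representation.
    """

    def __init__(self, lookback=96, patch_len=16, n_exog=4, d_model=64, n_heads=4):
        super().__init__()
        assert lookback % patch_len == 0
        self.patch_len = patch_len
        self.n_patches = lookback // patch_len
        # Patch-wise embedding of the endogenous series: one token per patch.
        self.patch_embed = nn.Linear(patch_len, d_model)
        # Variate-wise embedding of each exogenous series: one token per series.
        self.variate_embed = nn.Linear(lookback, d_model)
        # Learnable global endogenous token, shared across the batch.
        self.global_token = nn.Parameter(torch.randn(1, 1, d_model))
        self.self_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

    def forward(self, endo, exog):
        # endo: (batch, lookback); exog: (batch, n_exog, lookback)
        b = endo.size(0)
        patches = endo.view(b, self.n_patches, self.patch_len)
        tokens = torch.cat(
            [self.patch_embed(patches), self.global_token.expand(b, -1, -1)], dim=1
        )
        # Patch-wise self-attention: temporal patches and the global token interact.
        tokens, _ = self.self_attn(tokens, tokens, tokens)
        # Variate-wise cross-attention: the global endogenous token (query)
        # attends over one token per exogenous variate (keys/values).
        exog_tokens = self.variate_embed(exog)
        glob, _ = self.cross_attn(tokens[:, -1:], exog_tokens, exog_tokens)
        # Return enriched patch tokens plus the updated global token.
        return torch.cat([tokens[:, :-1], glob], dim=1)  # (batch, n_patches + 1, d_model)


# Example: 8 samples, 96-step lookback, 4 exogenous series.
block = TimeXerBlockSketch()
out = block(torch.randn(8, 96), torch.randn(8, 4, 96))
print(out.shape)  # torch.Size([8, 7, 64])
```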
Main authors: Wang, Yuxuan; Wu, Haixu; Dong, Jiaxiang; Qin, Guo; Zhang, Haoran; Liu, Yong; Qiu, Yunzhong; Wang, Jianmin; Long, Mingsheng
Format: Article
Language: English
Subjects: Computer Science - Artificial Intelligence; Computer Science - Learning
Online access: Order full text
creator | Wang, Yuxuan; Wu, Haixu; Dong, Jiaxiang; Qin, Guo; Zhang, Haoran; Liu, Yong; Qiu, Yunzhong; Wang, Jianmin; Long, Mingsheng |
description | Deep models have demonstrated remarkable performance in time series
forecasting. However, due to the partially-observed nature of real-world
applications, solely focusing on the target of interest, so-called endogenous
variables, is usually insufficient to guarantee accurate forecasting. Notably,
a system is often recorded into multiple variables, where the exogenous
variables can provide valuable external information for endogenous variables.
Thus, unlike well-established multivariate or univariate forecasting paradigms
that either treat all the variables equally or ignore exogenous information,
this paper focuses on a more practical setting: time series forecasting with
exogenous variables. We propose a novel approach, TimeXer, to ingest external
information to enhance the forecasting of endogenous variables. With deftly
designed embedding layers, TimeXer empowers the canonical Transformer with the
ability to reconcile endogenous and exogenous information, where patch-wise
self-attention and variate-wise cross-attention are used simultaneously.
Moreover, global endogenous tokens are learned to effectively bridge the causal
information underlying exogenous series into endogenous temporal patches.
Experimentally, TimeXer achieves consistent state-of-the-art performance on
twelve real-world forecasting benchmarks and exhibits notable generality and
scalability. Code is available at this repository:
https://github.com/thuml/TimeXer. |
doi_str_mv | 10.48550/arxiv.2402.19072 |
format | Article |
creationdate | 2024-02-29 |
rights | http://creativecommons.org/licenses/by-nc-nd/4.0 |
identifier | DOI: 10.48550/arxiv.2402.19072 |
language | eng |
recordid | cdi_arxiv_primary_2402_19072 |
source | arXiv.org |
subjects | Computer Science - Artificial Intelligence; Computer Science - Learning |
title | TimeXer: Empowering Transformers for Time Series Forecasting with Exogenous Variables |
url | https://arxiv.org/abs/2402.19072 |