RSMformer: an efficient multiscale transformer-based framework for long sequence time-series forecasting


Bibliographic details
Published in: Applied Intelligence (Dordrecht, Netherlands), 2024-01, Vol. 54 (2), p. 1275-1296
Authors: Tong, Guoxiang; Ge, Zhaoyuan; Peng, Dunlu
Format: Article
Language: English
Online access: Full text
Description

Abstract: Long sequence time-series forecasting (LSTF) is a significant and challenging task. Many real-world applications require long-term forecasting of time series. In recent years, Transformer-based models have emerged as a promising solution for LSTF tasks. Nevertheless, their performance is constrained by several issues, including the single time scale, the quadratic computational complexity of the self-attention mechanism, and the high memory occupation. To address these limitations, we propose a novel approach in this paper, namely the multiscale residual sparse attention model RSMformer, built upon the Transformer architecture. Firstly, a residual sparse attention (RSA) mechanism is devised to select dominant queries for computation according to an attention sparsity criterion, which effectively reduces the computational complexity to O(L log L). Secondly, we employ a multiscale forecasting strategy that iteratively refines prediction results at multiple scales using up- and down-sampling techniques and cross-scale centralization schemes, effectively capturing temporal dependencies at different time scales. Extensive experiments on six publicly available datasets show that RSMformer performs significantly better than state-of-the-art benchmarks and excels at LSTF tasks.
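To make the dominant-query idea in the abstract concrete, below is a minimal PyTorch sketch of sparsity-driven query selection in attention. It is an illustrative assumption, not the paper's exact RSA formulation: the sparsity measure used here (peak minus mean of the scaled scores, in the spirit of ProbSparse-style attention) and the function name sparse_attention are stand-ins, the residual branch of RSA is omitted, and the full score matrix is formed for clarity even though an O(L log L) implementation would estimate the measure from sampled keys.

```python
# Hypothetical sketch of selecting "dominant" queries by an attention
# sparsity measure; lazy queries fall back to a uniform-attention surrogate.
import math
import torch

def sparse_attention(Q, K, V, c=5):
    """Q, K, V: (batch, length, d_model). Returns (batch, length, d_model)."""
    B, L, D = Q.shape
    scale = 1.0 / math.sqrt(D)
    scores = torch.matmul(Q, K.transpose(-2, -1)) * scale      # (B, L, L)

    # Sparsity measure per query: peak score minus mean score.
    # Queries with a far-from-uniform score distribution are "dominant".
    sparsity = scores.max(dim=-1).values - scores.mean(dim=-1)  # (B, L)

    # Keep only u = c * ln(L) dominant queries, the source of the
    # claimed O(L log L) cost.
    u = min(L, max(1, int(c * math.log(L))))
    top_idx = sparsity.topk(u, dim=-1).indices                  # (B, u)

    # Lazy queries receive the mean of V (uniform-attention approximation).
    out = V.mean(dim=1, keepdim=True).expand(B, L, D).clone()

    # Full softmax attention only for the selected dominant queries.
    top_scores = torch.gather(
        scores, 1, top_idx.unsqueeze(-1).expand(B, u, L))       # (B, u, L)
    attn = torch.softmax(top_scores, dim=-1)
    out.scatter_(1, top_idx.unsqueeze(-1).expand(B, u, D),
                 torch.matmul(attn, V))
    return out

# Usage example (assumed shapes):
# x = torch.randn(2, 96, 64); y = sparse_attention(x, x, x)  # (2, 96, 64)
```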
ISSN: 0924-669X, 1573-7497
DOI: 10.1007/s10489-023-05250-8