An improved self-attention for long-sequence time-series data forecasting with missing values

Bibliographic details
Published in: Neural Computing & Applications, 2024-03, Vol. 36 (8), pp. 3921-3940
Authors: Zhang, Zhi-cheng; Wang, Yong; Peng, Jian-jian; Duan, Jun-ting
Format: Article
Language: English
Abstract: Long-sequence time-series forecasting based on deep learning has been applied in many practical scenarios. However, time-series sequences collected in the real world inevitably contain missing values due to sensor failures or network fluctuations. Most current research focuses on imputing the incomplete sequence during the data preprocessing stage, which leads to unsynchronized prediction and error accumulation. In this article, we propose an improved multi-head self-attention mechanism, DecayAttention, which can be applied to existing X-former models to handle missing values in time-series sequences without decreasing their prediction accuracy. We apply DecayAttention to the Transformer and two state-of-the-art X-former models, and the best prediction accuracy improves by 8.2%.
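The abstract does not spell out how DecayAttention modulates attention around missing observations. Below is a minimal, hypothetical PyTorch sketch of one plausible reading: a multi-head self-attention whose logits toward missing key positions are reduced by a learned per-head decay penalty. The class and parameter names (DecayedSelfAttention, decay_rate, miss_mask) and the softplus-penalty form are illustrative assumptions, not the paper's actual formulation.

```python
# Hypothetical sketch: decay-modulated multi-head self-attention for
# sequences with missing values. Not the paper's DecayAttention; the
# decay mechanism here is an assumption for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DecayedSelfAttention(nn.Module):
    def __init__(self, d_model: int, n_heads: int):
        super().__init__()
        assert d_model % n_heads == 0
        self.n_heads = n_heads
        self.d_head = d_model // n_heads
        self.qkv = nn.Linear(d_model, 3 * d_model)
        self.out = nn.Linear(d_model, d_model)
        # One learnable decay rate per head (an assumption; the abstract
        # does not specify the parameterization).
        self.decay_rate = nn.Parameter(torch.ones(n_heads))

    def forward(self, x: torch.Tensor, miss_mask: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, d_model); miss_mask: (batch, seq),
        # 1.0 where a value was observed, 0.0 where it is missing.
        b, t, _ = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        split = lambda z: z.view(b, t, self.n_heads, self.d_head).transpose(1, 2)
        q, k, v = split(q), split(k), split(v)  # (batch, heads, seq, d_head)

        scores = q @ k.transpose(-2, -1) / self.d_head ** 0.5  # (b, h, t, t)
        # Decay attention paid *to* missing positions: subtract a positive,
        # per-head penalty from their logits before the softmax.
        penalty = F.softplus(self.decay_rate).view(1, -1, 1, 1)
        scores = scores - penalty * (1.0 - miss_mask).view(b, 1, 1, t)

        attn = scores.softmax(dim=-1)
        out = (attn @ v).transpose(1, 2).reshape(b, t, -1)
        return self.out(out)

# Usage: zero-fill missing steps and pass the observation mask alongside,
# so no separate imputation stage is needed before the model runs.
x = torch.randn(2, 16, 64)                # toy batch
mask = (torch.rand(2, 16) > 0.2).float()  # ~20% of steps missing
y = DecayedSelfAttention(d_model=64, n_heads=4)(x * mask.unsqueeze(-1), mask)
print(y.shape)  # torch.Size([2, 16, 64])
```

Because the decay is applied inside attention rather than during preprocessing, this style of design avoids the separate imputation stage that the abstract identifies as a source of unsynchronized prediction and error accumulation.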
ISSN: 0941-0643 (print), 1433-3058 (electronic)
DOI: 10.1007/s00521-023-09347-6