Multiscale spatial‐temporal transformer with consistency representation learning for multivariate time series classification


Full description

Bibliographic Details
Published in: Concurrency and Computation 2024-12, Vol. 36 (27), p. n/a
Main authors: Wu, Wei; Qiu, Feiyue; Wang, Liping; Liu, Yanxiu
Format: Article
Language: English
Online Access: Full text
Description
Abstract: Multivariate time series classification holds significant importance in fields such as healthcare, energy management, and industrial manufacturing. Existing research focuses on capturing temporal changes or calculating time similarities to accomplish classification tasks. However, as the state of the system changes, capturing spatial‐temporal consistency within multivariate time series is key to a model's ability to classify accurately. This paper proposes the MSTformer model, specifically designed for multivariate time series classification tasks. Built on the Transformer architecture, the model uniquely focuses on multiscale information across both the time and feature dimensions. The encoder, through a learnable multiscale attention mechanism, divides the data into sequences of varying temporal scales to learn multiscale temporal features. The decoder, which receives the spatial view of the data, uses a dynamic scale attention mechanism to learn spatial‐temporal consistency in one‐dimensional space. In addition, the paper proposes an adaptive aggregation mechanism to synchronize and combine the outputs of the encoder and decoder, and introduces a multiscale 2D separable convolution designed to learn spatial‐temporal consistency in two‐dimensional space, enhancing the model's ability to learn spatial‐temporal consistency representations. Extensive experiments were conducted on 30 datasets, where MSTformer outperformed other models with an average accuracy of 85.6%. Ablation studies further demonstrate the reliability and stability of MSTformer.
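The abstract's central idea, splitting a multivariate series into views at several temporal scales and attending within each before combining them, can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the window-averaging, the parameter-free scaled dot-product attention, and the concatenation step are all simplifying assumptions standing in for the learnable multiscale attention and adaptive aggregation the paper describes.

```python
import numpy as np

def segment(x, scale):
    """Coarsen a (T, D) series into (T // scale, D) by averaging
    non-overlapping windows of length `scale` (scale 1 = original view)."""
    T, D = x.shape
    n = T // scale
    return x[: n * scale].reshape(n, scale, D).mean(axis=1)

def attention_pool(x):
    """Parameter-free scaled dot-product self-attention over time steps,
    followed by mean pooling into a single (D,) summary vector."""
    d = x.shape[-1]
    scores = x @ x.T / np.sqrt(d)
    scores -= scores.max(axis=-1, keepdims=True)   # numerical stability
    w = np.exp(scores)
    w /= w.sum(axis=-1, keepdims=True)             # softmax over time
    return (w @ x).mean(axis=0)

def multiscale_features(x, scales=(1, 2, 4)):
    """Concatenate per-scale summaries into one multiscale representation
    (a crude stand-in for the paper's adaptive aggregation)."""
    return np.concatenate([attention_pool(segment(x, s)) for s in scales])

rng = np.random.default_rng(0)
series = rng.normal(size=(16, 3))     # 16 time steps, 3 variables
feats = multiscale_features(series)
print(feats.shape)                    # (9,) = 3 scales x 3 variables
```

A classifier head would then map this fixed-length vector to class logits; in the paper that role is played by the full encoder-decoder stack rather than a single pooled vector.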
ISSN: 1532-0626; 1532-0634
DOI: 10.1002/cpe.8234