Approximate attention with MLP: a pruning strategy for attention-based model in multivariate time series forecasting
Format: Article
Language: English
Abstract: Attention-based architectures have become ubiquitous in time series forecasting tasks, including spatio-temporal forecasting (STF) and long-term time series forecasting (LTSF). Yet, our understanding of the reasons for their effectiveness remains limited. This work proposes a new way to understand self-attention networks: we show empirically that the entire attention mechanism in the encoder can be reduced to an MLP formed by feedforward, skip-connection, and layer normalization operations for temporal and/or spatial modeling in multivariate time series forecasting. Specifically, the Q, K, and V projections, the attention score calculation, the dot-product between the attention scores and V, and the final projection can be removed from attention-based networks without significantly degrading performance, so that the pruned network remains top-tier compared to other SOTA methods. For spatio-temporal networks, the MLP-replace-attention network achieves a reduction in FLOPs of $62.579\%$ with a loss in performance of less than $2.5\%$; for LTSF, a reduction in FLOPs of $42.233\%$ with a loss in performance of less than $2\%$.
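
The abstract describes replacing the encoder's self-attention sublayer with a plain MLP while retaining the surrounding skip connections and layer normalization. The following PyTorch sketch illustrates that idea under stated assumptions; the class and parameter names (`MLPReplacedEncoderBlock`, `d_model`, `d_hidden`), layer sizes, and activation choice are illustrative and not taken from the paper.

```python
# Minimal sketch (assumptions, not the paper's exact architecture): the
# multi-head self-attention sublayer of an encoder block is replaced by a
# simple feedforward MLP, while the skip-connection and layer-normalization
# structure of the block is kept.
import torch
import torch.nn as nn


class MLPReplacedEncoderBlock(nn.Module):
    def __init__(self, d_model: int, d_hidden: int):
        super().__init__()
        # The Q/K/V projections, attention-score computation, score-V
        # dot-product, and output projection are all removed; only a
        # feedforward path remains for temporal/spatial mixing.
        self.token_mlp = nn.Sequential(
            nn.Linear(d_model, d_hidden),
            nn.GELU(),
            nn.Linear(d_hidden, d_model),
        )
        self.norm1 = nn.LayerNorm(d_model)
        # The position-wise feedforward sublayer of a standard encoder
        # block is left unchanged.
        self.ffn = nn.Sequential(
            nn.Linear(d_model, d_hidden),
            nn.GELU(),
            nn.Linear(d_hidden, d_model),
        )
        self.norm2 = nn.LayerNorm(d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, sequence length, d_model)
        x = self.norm1(x + self.token_mlp(x))  # MLP in place of self-attention
        x = self.norm2(x + self.ffn(x))        # standard feedforward sublayer
        return x


if __name__ == "__main__":
    # Usage example with illustrative shapes: batch of 8 series,
    # 96 time steps, 64-dimensional embeddings.
    block = MLPReplacedEncoderBlock(d_model=64, d_hidden=256)
    y = block(torch.randn(8, 96, 64))
    print(y.shape)  # torch.Size([8, 96, 64])
```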
DOI: 10.48550/arxiv.2410.24023