TimeSiam: A Pre-Training Framework for Siamese Time-Series Modeling
Format: Article
Language: English
Online access: Order full text
Abstract: Time series pre-training has recently garnered wide attention for its
potential to reduce labeling expenses and benefit various downstream tasks.
Prior methods are mainly based on pre-training techniques well-acknowledged in
vision or language, such as masked modeling and contrastive learning. However,
randomly masking time series or calculating series-wise similarity will distort
or neglect inherent temporal correlations crucial in time series data. To
emphasize temporal correlation modeling, this paper proposes TimeSiam as a
simple but effective self-supervised pre-training framework for time series
based on Siamese networks. Concretely, TimeSiam pre-trains Siamese encoders to
capture intrinsic temporal correlations between randomly sampled past and
current subseries. With a simple data augmentation method (e.g., masking),
TimeSiam can benefit from diverse augmented subseries and learn internal
time-dependent representations through a past-to-current reconstruction.
Moreover, learnable lineage embeddings are introduced to distinguish the
temporal distance between sampled series and further foster the learning of
diverse temporal correlations. TimeSiam consistently outperforms extensive
advanced pre-training baselines, demonstrating superior forecasting and
classification capabilities across 13 standard benchmarks in both intra- and
cross-domain scenarios.
DOI: 10.48550/arxiv.2402.02475
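
The abstract describes pre-training Siamese encoders on randomly sampled past and current subseries: the current subseries is masked, a learnable lineage embedding marks the temporal distance between the two samples, and the current subseries is reconstructed from the past representation. Below is a minimal PyTorch sketch of that scheme, based only on the abstract; the module choices, sizes, masking scheme, lineage buckets, and loss are assumptions for illustration, not the authors' implementation.

```python
# Illustrative TimeSiam-style pre-training step, reconstructed from the abstract only.
# All names, hyperparameters, and the exact loss are assumptions, not the official code.
import torch
import torch.nn as nn


class SiamesePretrainer(nn.Module):
    def __init__(self, n_vars=7, d_model=64, n_lineages=4, mask_ratio=0.25):
        super().__init__()
        self.proj = nn.Linear(n_vars, d_model)  # point-wise embedding of each time step
        enc_layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=2)  # shared (Siamese) encoder
        dec_layer = nn.TransformerDecoderLayer(d_model, nhead=4, batch_first=True)
        self.decoder = nn.TransformerDecoder(dec_layer, num_layers=1)  # past-to-current decoder
        self.lineage = nn.Embedding(n_lineages, d_model)  # learnable lineage embeddings
        self.head = nn.Linear(d_model, n_vars)  # reconstruction head
        self.mask_ratio = mask_ratio

    def forward(self, past, current, lineage_id):
        # past, current: [batch, length, n_vars]; lineage_id indexes an assumed
        # temporal-distance bucket between the two sampled subseries.
        masked = current.clone()
        mask = torch.rand(current.shape[:2], device=current.device) < self.mask_ratio
        masked[mask] = 0.0  # simple masking augmentation of the current subseries

        past_tok = self.encoder(self.proj(past))  # encode the past subseries
        cur_tok = self.encoder(self.proj(masked) + self.lineage(lineage_id)[:, None, :])

        # Past-to-current reconstruction: decode the masked current tokens
        # while attending to the past representation.
        recon = self.head(self.decoder(cur_tok, past_tok))
        return nn.functional.mse_loss(recon, current)


# Toy usage: sample a past and a current window from one long multivariate series.
series = torch.randn(8, 512, 7)  # [batch, time, variables]
past, current = series[:, 0:96], series[:, 300:396]
lineage_id = torch.full((8,), 2)  # assumed distance bucket for a ~300-step gap
loss = SiamesePretrainer()(past, current, lineage_id)
loss.backward()
```

In this reading, the lineage embedding simply tags the encoder input with the sampled temporal distance; how exactly the paper injects it, and which augmentations it uses beyond masking, is not specified by the abstract.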