LETS-C: Leveraging Language Embedding for Time Series Classification
Abstract: Recent advancements in language modeling have shown promising results when applied to time series data. In particular, fine-tuning pre-trained large language models (LLMs) for time series classification tasks has achieved state-of-the-art (SOTA) performance on standard benchmarks. However, these LLM-based models have a significant drawback: their large size, with trainable parameters numbering in the millions. In this paper, we propose an alternative approach to leveraging the success of language modeling in the time series domain. Instead of fine-tuning LLMs, we use a language embedding model to embed time series and then pair the embeddings with a simple classification head composed of convolutional neural networks (CNN) and a multilayer perceptron (MLP). We conducted extensive experiments on well-established time series classification benchmark datasets. We demonstrate that LETS-C not only outperforms the current SOTA in classification accuracy but also offers a lightweight solution, using, on average, only 14.5% of the trainable parameters of the SOTA model. Our findings suggest that leveraging language encoders to embed time series data, combined with a simple yet effective classification head, offers a promising direction for achieving high-performance time series classification while maintaining a lightweight model architecture.
DOI: 10.48550/arxiv.2407.06533
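
The sketch below illustrates the pipeline the abstract describes: a pre-trained language embedding model encodes a time series, and a small CNN + MLP head classifies the resulting embedding. It is a minimal illustration based only on the abstract; the choice of embedding model ("all-MiniLM-L6-v2"), the serialization of the series as a string of values, and all layer sizes are assumptions, not details from the paper.

```python
# Minimal sketch of the LETS-C idea as described in the abstract.
# Assumptions (not from the paper): the embedding model, the
# text serialization of the series, and the head's layer sizes.

import torch
import torch.nn as nn
from sentence_transformers import SentenceTransformer


class ClassificationHead(nn.Module):
    """Simple CNN + MLP head over a fixed-size embedding vector."""

    def __init__(self, num_classes: int):
        super().__init__()
        # 1D convolution over the embedding dimension (illustrative sizes).
        self.conv = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(64),
        )
        self.mlp = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * 64, 128),
            nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, embed_dim) -> add a channel axis for Conv1d.
        return self.mlp(self.conv(x.unsqueeze(1)))


# Embed a time series by serializing its values as text; the paper
# may use a different serialization or embedding model.
encoder = SentenceTransformer("all-MiniLM-L6-v2")  # hypothetical choice
series = [0.12, 0.34, 0.31, 0.50, 0.47]
text = " ".join(f"{v:.2f}" for v in series)
embedding = torch.tensor(encoder.encode(text)).unsqueeze(0)  # (1, 384)

# Only the small head is trained; the encoder stays frozen, which is
# what keeps the trainable-parameter count low.
head = ClassificationHead(num_classes=3)
logits = head(embedding)  # (1, 3) class scores
```

Freezing the encoder and training only the head is consistent with the abstract's claim of using a small fraction of the trainable parameters of fine-tuned LLM baselines.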