LeMoLE: LLM-Enhanced Mixture of Linear Experts for Time Series Forecasting
Saved in:

Main authors: | , , , , , |
---|---|
Format: | Article |
Language: | English |
Subject headings: | |
Online access: | Order full text |
Summary: | Recent research has shown that large language models (LLMs) can be
effectively used for real-world time series forecasting due to their strong
natural language understanding capabilities. However, aligning time series
with the semantic spaces of LLMs comes with high computational cost and
inference complexity, particularly for long-range time series generation.
Building on recent advances in linear models for time series, this paper
introduces an LLM-enhanced mixture of linear experts for accurate and
efficient time series forecasting. The approach develops a mixture of linear
experts with multiple lookback lengths together with a new multimodal fusion
mechanism. The mixture of linear experts is efficient because of its
simplicity, while the multimodal fusion mechanism adaptively combines the
linear experts based on text-modality features learned from pre-trained large
language models. In experiments, we rethink whether existing time-series LLMs
actually need to align time series to LLM semantic spaces, and further
discuss their efficiency and effectiveness in time series forecasting. Our
experimental results show that the proposed LeMoLE model achieves lower
prediction errors and higher computational efficiency than existing LLM-based
models. |
DOI: | 10.48550/arxiv.2412.00053 |
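To make the abstract's architecture concrete, below is a minimal PyTorch sketch of the idea it describes: several linear experts, each mapping a different lookback window to the forecast horizon, combined by gating weights derived from a text embedding produced by a frozen pre-trained LLM. The class name, layer sizes, gating design, and the use of a pooled 768-dimensional text feature are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn


class MixtureOfLinearExperts(nn.Module):
    """Sketch of an LLM-conditioned mixture of linear experts.

    Each expert is a single linear map from one lookback window length to
    the forecast horizon; a small gate turns a frozen-LLM text embedding
    into softmax mixture weights over the experts.
    """

    def __init__(self, lookbacks, horizon, text_dim):
        super().__init__()
        self.lookbacks = lookbacks  # e.g. [96, 192, 336] (assumed values)
        # One linear expert per lookback length.
        self.experts = nn.ModuleList(
            nn.Linear(L, horizon) for L in lookbacks
        )
        # Gate: text embedding -> mixture weights over experts.
        self.gate = nn.Sequential(
            nn.Linear(text_dim, len(lookbacks)),
            nn.Softmax(dim=-1),
        )

    def forward(self, x, text_emb):
        # x: (batch, max_lookback) history, most recent value last.
        # text_emb: (batch, text_dim) feature from a frozen LLM.
        outs = torch.stack(
            [exp(x[:, -L:]) for exp, L in zip(self.experts, self.lookbacks)],
            dim=-1,
        )  # (batch, horizon, n_experts)
        w = self.gate(text_emb)                  # (batch, n_experts)
        return (outs * w.unsqueeze(1)).sum(-1)   # (batch, horizon)


# Toy usage with random tensors (shapes only, not the paper's setup).
model = MixtureOfLinearExperts(lookbacks=[96, 192, 336], horizon=24, text_dim=768)
x = torch.randn(8, 336)   # history of length max(lookbacks)
t = torch.randn(8, 768)   # e.g. a pooled LLM embedding of a series description
print(model(x, t).shape)  # torch.Size([8, 24])
```

Because every expert is a single linear layer, the forecasting path stays cheap at inference time; the only LLM cost is producing the text embedding, which can be precomputed once per series, consistent with the efficiency argument in the abstract.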