Learning K-U-Net with constant complexity: An Application to time series forecasting
Saved in:

Main author: | , , , , |
---|---|
Format: | Article |
Language: | eng |
Subjects: | |
Online access: | Order full text |
Abstract: | Training deep models for time series forecasting is a critical task with an
inherent challenge of time complexity. While current methods generally ensure
linear time complexity, our observations on temporal redundancy show that
high-level features are learned 98.44% slower than low-level features. To
address this issue, we introduce a new exponentially weighted stochastic
gradient descent algorithm designed to achieve constant time complexity in deep
learning models. We prove that the theoretical complexity of this learning
method is constant. Evaluation of this method on Kernel U-Net (K-U-Net) on
synthetic datasets shows a significant reduction in complexity while improving
the accuracy of the test set. |
DOI: | 10.48550/arxiv.2410.02438 |
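The abstract gives no implementation details for the exponentially weighted stochastic gradient descent it mentions. As a loose, hypothetical illustration only (not the paper's algorithm), the sketch below updates each of L levels on an exponentially sparse schedule, so the total number of per-level updates over T steps stays below 2T regardless of depth; the level names, 2**k schedule, and compensating weight are all assumptions for this toy example.

```python
import numpy as np

# Hypothetical sketch: L "levels" (think U-Net depths), where level k is
# updated only every 2**k steps, with its gradient scaled by 2**k to
# compensate for the sparser schedule. Total updates over T steps are
# sum_k T / 2**k < 2*T, independent of depth. This is only an
# illustration of exponentially weighted updates, not the authors' method.

L = 5          # number of levels (assumed)
T = 1024       # training steps
lr = 0.1
w = np.zeros(L)                                 # one toy parameter per level
targets = np.arange(1, L + 1, dtype=float)      # toy regression targets

updates = np.zeros(L, dtype=int)
for t in range(T):
    for k in range(L):
        if t % (2 ** k) == 0:                   # exponentially sparse schedule
            grad = w[k] - targets[k]            # gradient of 0.5*(w - target)^2
            w[k] -= lr * (2 ** k) * grad        # weight compensates sparsity
            updates[k] += 1

print(updates.tolist())   # [1024, 512, 256, 128, 64]
print(int(updates.sum())) # 1984, i.e. < 2*T total updates
```

Each level still converges to its target here because the compensated step size keeps the per-update contraction factor below one in magnitude; the point is only that the update count per step is bounded by a constant rather than growing with depth.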