Time-Smoothed Gradients for Online Forecasting
Saved in:
Main authors: | , |
Format: Article
Language: English
Online access: Order full text
Abstract: Here, we study different update rules for stochastic gradient descent (SGD) in online forecasting problems. The choice of the learning rate is critical in SGD, but it may not be feasible to tune this parameter in an online setting. It is therefore desirable to have an update rule that is not sensitive to the choice of the learning rate. Inspired by the local regret metric that we introduced previously, we propose using time-smoothed gradients within the SGD update. Using the public data set GEFCom2014, we validate that our approach yields more stable results than existing approaches, and we show that this simple approach is also computationally efficient compared to the alternatives.
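The abstract sketches the idea without stating the update rule. The Python sketch below shows one common form of a time-smoothed gradient step, as used in the local-regret literature: replace the single newest gradient in SGD with the uniform average of the gradients of the last w losses, all evaluated at the current parameters. The linear squared-error forecaster, the window size, the learning rate, and the name time_smoothed_sgd are illustrative assumptions, not the paper's reference implementation.

```python
import numpy as np

def time_smoothed_sgd(theta, data_stream, lr=0.05, window=5):
    """SGD with a time-smoothed gradient: each update averages the
    gradients of the last `window` losses, all evaluated at the
    current iterate, instead of using only the newest gradient.

    Illustrative sketch; the loss, uniform weighting, and
    hyper-parameters are assumptions, not the paper's exact method.
    """
    recent = []  # sliding window of the most recent (x_i, y_i) pairs
    for x_t, y_t in data_stream:
        recent.append((x_t, y_t))
        if len(recent) > window:
            recent.pop(0)
        # (1/w) * sum_i grad f_i(theta), where f_i is the squared error
        # of a linear forecaster: f_i(theta) = (theta @ x_i - y_i)**2
        grad = np.zeros_like(theta)
        for x_i, y_i in recent:
            grad += 2.0 * (theta @ x_i - y_i) * x_i
        theta = theta - lr * grad / len(recent)
        yield theta

# Toy usage: recover a linear relationship from a noisy stream.
rng = np.random.default_rng(0)
theta_true = np.array([0.5, -0.3])
xs = [rng.standard_normal(2) for _ in range(2000)]
stream = ((x, x @ theta_true + 0.01 * rng.standard_normal()) for x in xs)
for theta in time_smoothed_sgd(np.zeros(2), stream):
    pass
print(theta)  # converges close to theta_true
```

Averaging gradients over a window dampens the effect of any single noisy observation, which is one plausible reason the method would be less sensitive to the learning rate than plain SGD.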
DOI: 10.48550/arxiv.1905.08850