Mini-Batch Learning Strategies for modeling long term temporal dependencies: A study in environmental applications
Saved in:
Main authors: | , , , , , , , , , , , |
Format: | Article |
Language: | eng |
Subjects: | |
Online access: | Order full text |
Abstract: | In many environmental applications, recurrent neural networks (RNNs) are
often used to model physical variables with long temporal dependencies. However,
due to mini-batch training, temporal relationships between training segments within
a batch (intra-batch) as well as between batches (inter-batch) are not considered,
which can lead to limited performance. Stateful RNNs aim to address this issue by
passing hidden states between batches. Since Stateful RNNs ignore intra-batch
temporal dependency, there exists a trade-off between training stability and
capturing temporal dependency. In this paper, we provide a quantitative comparison
of different Stateful RNN modeling strategies and propose two strategies to enforce
both intra- and inter-batch temporal dependency. First, we extend Stateful RNNs by
defining a batch as a temporally ordered set of training segments, which enables
intra-batch sharing of temporal information. While this approach significantly
improves performance, it leads to much longer training times because training
becomes highly sequential. To address this issue, we further propose a new strategy
that augments a training segment with an initial value of the target variable from
the timestep right before the start of the training segment. In other words, we
provide an initial value of the target variable as an additional input so that the
network can focus on learning changes relative to that initial value. With this
strategy, samples can be passed in any order (mini-batch training), which
significantly reduces training time while maintaining performance. In demonstrating
our approach in hydrological modeling, we observe that the most significant gains in
predictive accuracy occur when these methods are applied to state variables whose
values change more slowly, such as soil water and snowpack, rather than to
continuously changing flux variables such as streamflow. |
DOI: | 10.48550/arxiv.2210.08347 |
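To make the second strategy in the abstract concrete, the following is a minimal sketch, not the authors' implementation, of how a training segment can be augmented with the target value from the timestep immediately before the segment starts so that segments can then be shuffled into ordinary mini-batches. It assumes PyTorch; the class name `AugmentedSegmentDataset` and the names `series`, `targets`, and `segment_len` are illustrative assumptions.

```python
# Minimal sketch (not the authors' code) of the initial-value augmentation
# strategy described in the abstract. Assumes PyTorch; names, shapes, and
# the segment length are illustrative.
import torch
from torch.utils.data import Dataset, DataLoader


class AugmentedSegmentDataset(Dataset):
    """Cuts a long time series into fixed-length segments and appends, as an
    extra input feature, the target value from the timestep immediately
    before each segment starts."""

    def __init__(self, series, targets, segment_len):
        # series:  [T, n_features] driver inputs (e.g. meteorological forcings)
        # targets: [T] target variable (e.g. soil water, snowpack, streamflow)
        self.series = series
        self.targets = targets
        self.segment_len = segment_len
        # Start segments at index 1 so a "previous" target value always exists.
        self.starts = list(range(1, len(series) - segment_len + 1, segment_len))

    def __len__(self):
        return len(self.starts)

    def __getitem__(self, idx):
        s = self.starts[idx]
        x = self.series[s:s + self.segment_len]               # [L, n_features]
        y = self.targets[s:s + self.segment_len]               # [L]
        y0 = self.targets[s - 1].expand(self.segment_len, 1)   # initial value, repeated
        # Concatenate the initial target value as an additional input channel,
        # so the RNN can learn changes relative to that initial value.
        x_aug = torch.cat([x, y0], dim=-1)                     # [L, n_features + 1]
        return x_aug, y


# Because every segment carries its own initial condition, segments can be
# shuffled and batched in any order, as in ordinary mini-batch training.
T, n_features, segment_len = 10_000, 5, 365
series, targets = torch.randn(T, n_features), torch.randn(T)
loader = DataLoader(AugmentedSegmentDataset(series, targets, segment_len),
                    batch_size=32, shuffle=True)
```

Under these assumptions, each segment carries its own initial condition, so the loader can shuffle segments freely; this is what lets the approach recover mini-batch training speed instead of the sequential state passing that a temporally ordered Stateful RNN batch requires.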