Towards Scalable and Stable Parallelization of Nonlinear RNNs
Abstract: Conventional nonlinear RNNs are not naturally parallelizable across the sequence length, unlike transformers and linear RNNs. Lim et al. (2024) therefore tackle parallelized evaluation of nonlinear RNNs, posing it as a fixed-point problem solved with Newton's method. By deriving and applying a parallelized form of Newton's method, they achieve large speedups over sequential evaluation. However, their approach inherits cubic computational complexity and numerical instability. We address both weaknesses. To reduce the computational complexity, we apply quasi-Newton approximations and show that they converge comparably to full Newton while using less memory and running faster. To stabilize Newton's method, we leverage a connection between Newton's method damped with trust regions and Kalman smoothing. This connection lets us stabilize each iteration in accordance with the trust region while retaining performance through efficient parallelized Kalman algorithms. We compare these methods empirically and highlight use cases where each algorithm excels.
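
To make the fixed-point formulation concrete, here is a minimal sketch in JAX of a diagonal quasi-Newton variant of the idea. The cell `rnn_step`, the helper `parallel_quasi_newton`, and all dimensions and iteration counts are our own illustrative assumptions, not the paper's code. Each sweep linearizes the recurrence h_t = f(h_{t-1}, x_t) around the current guess for all hidden states, keeps only the diagonal of the Jacobian, and solves the resulting linear recurrence for every time step at once with an associative scan.

```python
import jax
import jax.numpy as jnp

# Toy setup (hypothetical cell and sizes, scaled so the dynamics are contractive).
D, T = 8, 64
kW, kU, kX = jax.random.split(jax.random.PRNGKey(0), 3)
W = 0.3 * jax.random.normal(kW, (D, D)) / jnp.sqrt(D)
U = jax.random.normal(kU, (D, D)) / jnp.sqrt(D)

def rnn_step(h_prev, x):
    """One step of a nonlinear RNN: h_t = tanh(W h_{t-1} + U x_t)."""
    return jnp.tanh(W @ h_prev + U @ x)

def combine(e1, e2):
    """Compose two elementwise affine maps h -> a*h + b (associative)."""
    a1, b1 = e1
    a2, b2 = e2
    return a2 * a1, a2 * b1 + b2

def parallel_quasi_newton(xs, h0, num_iters=30):
    """Quasi-Newton fixed-point solve for all hidden states at once."""
    H = jnp.zeros((xs.shape[0], h0.shape[0]))       # guess for h_1..h_T
    for _ in range(num_iters):
        H_shift = jnp.vstack([h0[None], H[:-1]])    # h_0..h_{T-1}
        f_vals = jax.vmap(rnn_step)(H_shift, xs)    # f(h_{t-1}, x_t)
        # Quasi-Newton approximation: keep only the diagonal of df/dh_{t-1}.
        diag_J = jax.vmap(
            lambda h, x: jnp.diagonal(jax.jacfwd(rnn_step)(h, x))
        )(H_shift, xs)
        # Linearized recurrence h_t = diag_J_t * h_{t-1} + b_t, solved for
        # all t in parallel by scanning over affine-map compositions.
        b = f_vals - diag_J * H_shift
        A, B = jax.lax.associative_scan(combine, (diag_J, b))
        H = A * h0 + B                              # h_t = A_t h_0 + B_t
    return H

xs = jax.random.normal(kX, (T, D))
h0 = jnp.zeros(D)
H_par = parallel_quasi_newton(xs, h0)

# Sanity check against ordinary sequential evaluation.
_, H_seq = jax.lax.scan(lambda h, x: (rnn_step(h, x),) * 2, h0, xs)
print(jnp.max(jnp.abs(H_par - H_seq)))  # should be near zero once converged
```

Full Newton carries dense D-by-D Jacobians through the scan, so each combine costs cubic time in the state dimension; the diagonal approximation makes each combine elementwise, which is the source of the memory and runtime savings the abstract describes. Nothing in this sketch guarantees convergence for non-contractive dynamics, which is precisely the instability the trust-region variant targets.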
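The trust-region/Kalman connection can be sketched as a worked equation, using our own notation rather than the paper's: h^i denotes the current iterate, J_t the Jacobian of the cell f at h^i_{t-1}, and λ the damping strength. A Levenberg-Marquardt-damped Newton step on the stacked states minimizes

```latex
\min_{h_{1:T}} \sum_{t=1}^{T} \left\| h_t - f\!\left(h^{i}_{t-1}, x_t\right)
  - J_t \left(h_{t-1} - h^{i}_{t-1}\right) \right\|^2
  + \lambda \sum_{t=1}^{T} \left\| h_t - h^{i}_t \right\|^2 ,
```

which, up to constants, is the negative log-posterior of the linear-Gaussian state-space model

```latex
h_t = J_t\, h_{t-1} + b_t + w_t, \quad w_t \sim \mathcal{N}(0, I), \quad
  b_t = f\!\left(h^{i}_{t-1}, x_t\right) - J_t\, h^{i}_{t-1}, \\
y_t = h_t + v_t, \quad v_t \sim \mathcal{N}\!\left(0, \lambda^{-1} I\right), \quad
  \text{with pseudo-observations } y_t := h^{i}_t .
```

A Kalman smoother computes this MAP estimate exactly; the damping term pulls each update toward the previous iterate, enforcing the trust region, and parallel-in-time Kalman smoothers evaluate the step in logarithmic depth in the sequence length.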
DOI: 10.48550/arxiv.2407.19115