A Recursively Recurrent Neural Network (R2N2) Architecture for Learning Iterative Algorithms
| Main authors: | , , , , , , |
|---|---|
| Format: | Article |
| Language: | eng |
| Subjects: | |
| Online access: | Order full text |
Summary:

Meta-learning of numerical algorithms for a given task consists of the data-driven identification and adaptation of an algorithmic structure and the associated hyperparameters. To limit the complexity of the meta-learning problem, neural architectures with a certain inductive bias towards favorable algorithmic structures can, and should, be used. We generalize our previously introduced Runge-Kutta neural network to a recursively recurrent neural network (R2N2) superstructure for the design of customized iterative algorithms. In contrast to off-the-shelf deep learning approaches, it features a distinct division into modules for the generation of information and for the subsequent assembly of this information towards a solution. Local information in the form of a subspace is generated by subordinate (inner) iterations of recurrent function evaluations starting at the current outer iterate. The update to the next outer iterate is computed as a linear combination of these evaluations, reducing the residual in this space, and constitutes the output of the network. We demonstrate that regular training of the weight parameters inside the proposed superstructure on input/output data of various computational problem classes yields iterations similar to Krylov solvers for linear equation systems, Newton-Krylov solvers for nonlinear equation systems, and Runge-Kutta integrators for ordinary differential equations. Due to its modularity, the superstructure can be readily extended with functionalities needed to represent more general classes of iterative algorithms traditionally based on Taylor series expansions.
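To make the inner/outer structure described in the abstract concrete, the following minimal Python sketch specializes the pattern to a linear system A x = b, the case where the abstract reports that trained iterations come to resemble Krylov solvers. The function name, parameters, and the least-squares weight choice are illustrative assumptions: in the actual R2N2 architecture, the linear-combination weights are trained network parameters, whereas here a residual-minimizing solve stands in for them.

```python
import numpy as np

def r2n2_like_solve(A, b, x0, n_inner=4, n_outer=10):
    """Sketch of an R2N2-style outer/inner iteration for A x = b.

    Hypothetical illustration only: the residual-minimizing least-squares
    step below replaces the trained linear-combination weights of the
    actual R2N2 network.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(n_outer):
        r = b - A @ x                # residual at the current outer iterate
        # Inner loop: recurrent function evaluations starting at the
        # current iterate span a local (Krylov-like) subspace.
        V = [r]
        for _ in range(n_inner - 1):
            V.append(A @ V[-1])      # next basis vector from a function evaluation
        V = np.column_stack(V)       # local subspace, shape (n, n_inner)
        # Outer update: the linear combination of subspace vectors that
        # reduces the residual in this space (here via least squares).
        w, *_ = np.linalg.lstsq(A @ V, r, rcond=None)
        x = x + V @ w
    return x
```

Under these assumptions, each outer step is a restarted Krylov-type update; replacing the fixed least-squares weights with learned parameters, and the matrix product with a general residual function, recovers the flavor of the trainable superstructure the abstract describes.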
DOI: 10.48550/arxiv.2211.12386