On universal approximation and error bounds for Fourier Neural Operators
Format: Article
Language: English
Abstract: Journal of Machine Learning Research 22 (2021) 1-76. Fourier neural operators (FNOs) have recently been proposed as an effective framework for learning operators that map between infinite-dimensional spaces. We prove that FNOs are universal, in the sense that they can approximate any continuous operator to desired accuracy. Moreover, we suggest a mechanism by which FNOs can approximate operators associated with PDEs efficiently. Explicit error bounds are derived to show that the size of the FNO, approximating operators associated with a Darcy-type elliptic PDE and with the incompressible Navier-Stokes equations of fluid dynamics, only increases sub(log)-linearly in terms of the reciprocal of the error. Thus, FNOs are shown to efficiently approximate operators arising in a large class of PDEs.
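The operators studied in the abstract are built from Fourier layers: transform the input function to Fourier space, keep a fixed number of low-frequency modes, apply a learned linear map per mode, and transform back. A minimal sketch of that spectral-convolution step on a 1D grid, using NumPy (the function name `fourier_layer` and the shapes are illustrative assumptions, not the paper's notation):

```python
import numpy as np

def fourier_layer(v, weights, k_max):
    """Linear part of one Fourier layer, as a sketch.

    v       : real array of shape (n_points, channels), the input function
              sampled on a uniform 1D grid.
    weights : complex array of shape (k_max, channels, channels), one learned
              channel-mixing matrix per retained Fourier mode (illustrative).
    k_max   : number of low-frequency modes kept; higher modes are zeroed.
    """
    n, _ = v.shape
    v_hat = np.fft.rfft(v, axis=0)            # forward FFT along the grid
    out_hat = np.zeros_like(v_hat)            # truncate: drop modes >= k_max
    # per-mode channel mixing: out_hat[k] = v_hat[k] @ weights[k]
    out_hat[:k_max] = np.einsum("kc,kcd->kd", v_hat[:k_max], weights)
    return np.fft.irfft(out_hat, n=n, axis=0)  # back to physical space
```

A full FNO layer would add a pointwise linear term and a nonlinearity on top of this map; the mode truncation `k_max` is what makes the parameter count independent of the grid resolution.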
DOI: 10.48550/arxiv.2107.07562