Deep neural network approximation theory for high-dimensional functions
Saved in:
Main authors: , , , ,
Format: Article
Language: eng
Subjects:
Online access: Order full text
Abstract: The purpose of this article is to develop machinery to study the capacity of
deep neural networks (DNNs) to approximate high-dimensional functions. In
particular, we show that DNNs have the expressive power to overcome the curse
of dimensionality in the approximation of a large class of functions. More
precisely, we prove that these functions can be approximated by DNNs on compact
sets such that the number of parameters necessary to represent the
approximating DNNs grows at most polynomially in the reciprocal $1/\varepsilon$
of the approximation accuracy $\varepsilon>0$ and in the input dimension $d\in
\mathbb{N} =\{1,2,3,\dots\}$. To this end, we introduce certain approximation
spaces, consisting of sequences of functions that can be efficiently
approximated by DNNs. We then establish closure properties which we combine
with known and new bounds on the number of parameters necessary to approximate
locally Lipschitz continuous functions, maximum functions, and product
functions by DNNs. The main result of this article demonstrates that DNNs have
sufficient expressiveness to approximate certain sequences of functions which
can be constructed by means of a finite number of compositions using locally
Lipschitz continuous functions, maxima, and products without the curse of
dimensionality.
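Read as a quantitative statement, and with $C, p, q \ge 0$ understood as generic placeholder constants rather than values taken from the article, the growth condition described in the abstract can be sketched as follows: the number of parameters $\mathcal{P}(\varepsilon, d)$ of the approximating DNNs is required to satisfy
\[
  \mathcal{P}(\varepsilon, d) \;\le\; C \, d^{p} \, \varepsilon^{-q}
  \qquad \text{for all } \varepsilon \in (0,1], \; d \in \mathbb{N},
\]
so that the parameter count grows at most polynomially both in the input dimension $d$ and in the reciprocal $1/\varepsilon$ of the approximation accuracy; this is the sense in which the curse of dimensionality is avoided.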
DOI: 10.48550/arxiv.2112.14523