Rank Bounds for Approximating Gaussian Densities in the Tensor-Train Format

Bibliographic Details
Published in: arXiv.org, 2020-11
Authors: Rohrbach, Paul B.; Dolgov, Sergey; Grasedyck, Lars; Scheichl, Robert
Format: Article
Language: English
Online access: Full text
Description
Abstract: Low-rank tensor approximations have shown great potential for uncertainty quantification in high dimensions, for example, to build surrogate models that can be used to speed up large-scale inference problems (Eigel et al., Inverse Problems 34, 2018; Dolgov et al., Statistics & Computing 30, 2020). The feasibility and efficiency of such approaches depend critically on the rank that is necessary to represent or approximate the underlying distribution. In this paper, a priori rank bounds for approximations in the functional tensor-train representation are developed for the case of Gaussian models. It is shown that, under suitable conditions on the precision matrix, the Gaussian density can be approximated to high accuracy without suffering from an exponential growth of complexity as the dimension increases. These results provide a rigorous justification of both the suitability and the limitations of low-rank tensor methods in a simple but important model case. Numerical experiments confirm that the rank bounds capture the qualitative behavior of the rank structure when varying the parameters of the precision matrix and the accuracy of the approximation. Finally, the practical relevance of the theoretical results is demonstrated in the context of a Bayesian filtering problem.
ISSN:2331-8422
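
To make the setting of the abstract concrete, here is a minimal Python sketch (not code from the paper): it discretizes an unnormalized Gaussian density exp(-x^T A x / 2) on a full tensor grid for a small dimension d and measures its tensor-train (TT) ranks with a truncated TT-SVD sweep. The tridiagonal precision matrix A, the grid, and the tolerance are illustrative assumptions; the full-grid evaluation is only feasible for small d, which is precisely the regime TT methods are meant to escape.

import numpy as np

d, n = 5, 17                      # dimensions and grid points per dimension
grid = np.linspace(-3.0, 3.0, n)

# Illustrative banded (tridiagonal) precision matrix; parameters chosen
# arbitrarily here, not taken from the paper's experiments.
A = np.eye(d) + 0.4 * (np.eye(d, k=1) + np.eye(d, k=-1))

# Evaluate the unnormalized Gaussian density on the full tensor grid.
X = np.stack(np.meshgrid(*([grid] * d), indexing="ij"), axis=-1)  # (n,...,n,d)
density = np.exp(-0.5 * np.einsum("...i,ij,...j->...", X, A, X))

def tt_ranks(tensor, tol=1e-8):
    """TT ranks from a left-to-right truncated SVD sweep (relative tol)."""
    ranks, r = [], 1
    mat = tensor.reshape(tensor.shape[0], -1)
    for k in range(tensor.ndim - 1):
        mat = mat.reshape(r * tensor.shape[k], -1)
        _, s, Vt = np.linalg.svd(mat, full_matrices=False)
        r = max(1, int(np.sum(s > tol * s[0])))
        ranks.append(r)
        mat = s[:r, None] * Vt[:r]    # pass the truncated remainder onward
    return ranks

print(tt_ranks(density))  # ranks stay small rather than growing with n**k

In this toy run the observed ranks remain far below the maximal possible values, consistent with the paper's claim that, under suitable conditions on the precision matrix, the TT ranks needed for a given accuracy do not explode with the dimension.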