From continuous-time formulations to discretization schemes: tensor trains and robust regression for BSDEs and parabolic PDEs
Main authors: , ,
Format: Article
Language: English
Abstract: The numerical approximation of partial differential equations (PDEs) poses
formidable challenges in high dimensions, since classical grid-based methods
suffer from the so-called curse of dimensionality. Recent attempts rely on a
combination of Monte Carlo methods and variational formulations, using neural
networks for function approximation. Extending previous work (Richter et al.,
2021), we argue that tensor trains provide an appealing framework for parabolic
PDEs: the combination of reformulations in terms of backward stochastic
differential equations and regression-type methods holds the promise of
leveraging latent low-rank structures, enabling both compression and efficient
computation. Emphasizing a continuous-time viewpoint, we develop iterative
schemes that differ in computational efficiency and robustness. We demonstrate
both theoretically and numerically that our methods can achieve a favorable
trade-off between accuracy and computational efficiency. Whereas previous
methods have been either accurate or fast, we identify a novel numerical
strategy that can often combine both of these aspects.
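
Background note (standard material, not part of the record itself): the BSDE reformulation mentioned in the abstract is the classical nonlinear Feynman–Kac correspondence, sketched below for a semilinear parabolic terminal-value problem.

```latex
% Semilinear parabolic terminal-value problem:
\begin{equation}
  \partial_t V + \tfrac{1}{2}\operatorname{Tr}\!\left(\sigma\sigma^{\top}\nabla^{2}V\right)
  + b\cdot\nabla V + h\!\left(t,x,V,\sigma^{\top}\nabla V\right) = 0,
  \qquad V(T,\cdot) = g.
\end{equation}
% Associated forward-backward SDE system:
\begin{align}
  \mathrm{d}X_t &= b(t,X_t)\,\mathrm{d}t + \sigma(t,X_t)\,\mathrm{d}W_t,\\
  \mathrm{d}Y_t &= -h(t,X_t,Y_t,Z_t)\,\mathrm{d}t + Z_t^{\top}\mathrm{d}W_t,
  \qquad Y_T = g(X_T),
\end{align}
% whose solution satisfies Y_t = V(t,X_t) and Z_t = \sigma^{\top}\nabla V(t,X_t),
% so learning (Y, Z) along sample paths of X recovers the PDE solution.
```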
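To make the regression idea concrete, here is a minimal least-squares Monte Carlo sketch of a backward scheme for such a BSDE. The toy driver, polynomial basis, and explicit Euler discretization are illustrative assumptions; the paper's actual method replaces the polynomial function class with tensor trains.

```python
# A minimal least-squares Monte Carlo sketch of a backward BSDE scheme.
# All problem data below are assumed for illustration only.
import numpy as np

rng = np.random.default_rng(0)

def h(t, x, y, z):
    return -0.5 * z**2          # toy quadratic driver (assumption)

def g(x):
    return np.log(0.5 * (1.0 + x**2))   # toy terminal condition (assumption)

T, N, M = 1.0, 50, 10_000       # horizon, time steps, Monte Carlo samples
dt = T / N

# Forward Euler-Maruyama paths of X (here simply Brownian motion, b=0, sigma=1).
dW = rng.normal(0.0, np.sqrt(dt), size=(N, M))
X = np.vstack([np.zeros((1, M)), np.cumsum(dW, axis=0)])

def regress(x, target, degree=4):
    """Least-squares projection of `target` onto polynomials in x,
    approximating the conditional expectation E[target | x]."""
    basis = np.vander(x, degree + 1)              # (M, degree+1) design matrix
    coef, *_ = np.linalg.lstsq(basis, target, rcond=None)
    return basis @ coef                           # fitted values at the samples

# Backward induction: Y_N = g(X_N), then regress at each earlier time step.
Y = g(X[N])
for n in reversed(range(N)):
    Z = regress(X[n], Y * dW[n]) / dt             # Z_n ~ E[Y_{n+1} dW_n | X_n] / dt
    Ycond = regress(X[n], Y)                      # ~ E[Y_{n+1} | X_n]
    Y = Ycond + h(n * dt, X[n], Ycond, Z) * dt    # explicit backward Euler step

print("estimated Y_0 ~", Y.mean())
```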
DOI: 10.48550/arxiv.2307.15496