Deep Empirical Risk Minimization in finance: looking into the future

Bibliographic Details
Authors: Reppen, A. Max; Soner, H. Mete
Format: Article
Language: English
Description
Abstract: Many modern computational approaches to classical problems in quantitative finance are formulated as empirical loss minimization (ERM), allowing direct application of classical results from statistical machine learning. These methods, designed to construct the optimal feedback representation of hedging or investment decisions directly, are analyzed in this framework, demonstrating both their effectiveness and their susceptibility to generalization error. Classical techniques show that over-training renders the trained investment decisions anticipative and establish overlearning for large hypothesis spaces. On the other hand, non-asymptotic estimates based on Rademacher complexity show convergence for sufficiently large training sets. These results emphasize the importance of synthetic data generation and of appropriately calibrating complex models to market data. A numerically studied stylized example illustrates these possibilities, including the influence of problem dimension on the degree of overlearning and the effectiveness of the approach.
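
Below is a minimal sketch of the deep ERM idea described in the abstract, not the authors' implementation: a small neural network gives the hedge ratio as a feedback function of (time, spot), and its weights are fit by minimizing an empirical hedging loss over synthetic price paths. All specifics (Black-Scholes dynamics, a European call, mean squared hedging error, network size, PyTorch) are assumptions made for illustration. The out-of-sample evaluation at the end exhibits the generalization gap the abstract warns about: with too few training paths or too much training, the in-sample loss keeps falling while the out-of-sample loss does not, because the policy implicitly anticipates the future of the paths it was trained on.

import torch

torch.manual_seed(0)
n_paths, n_steps = 5_000, 30          # training set size and time grid (assumed)
dt, mu, sigma = 1.0 / n_steps, 0.0, 0.2
s0, strike = 1.0, 1.0                 # assumed market parameters

# Feedback policy: hedge ratio as a function of (time, spot).
policy = torch.nn.Sequential(
    torch.nn.Linear(2, 32), torch.nn.ReLU(),
    torch.nn.Linear(32, 32), torch.nn.ReLU(),
    torch.nn.Linear(32, 1),
)
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

def simulate_paths(n):
    """Synthetic geometric Brownian motion paths, shape (n, n_steps + 1)."""
    z = torch.randn(n, n_steps)
    log_inc = (mu - 0.5 * sigma ** 2) * dt + sigma * dt ** 0.5 * z
    log_s = torch.cat([torch.zeros(n, 1), torch.cumsum(log_inc, dim=1)], dim=1)
    return s0 * torch.exp(log_s)

def hedging_loss(paths):
    """Empirical loss: mean squared replication error of the call payoff."""
    n = paths.shape[0]
    pnl = torch.zeros(n)
    for k in range(n_steps):
        state = torch.cat([torch.full((n, 1), k * dt), paths[:, k:k + 1]], dim=1)
        delta = policy(state).squeeze(1)              # feedback decision at t_k
        pnl = pnl + delta * (paths[:, k + 1] - paths[:, k])
    payoff = torch.relu(paths[:, -1] - strike)
    return torch.mean((payoff - pnl) ** 2)

train = simulate_paths(n_paths)       # ERM: a fixed synthetic training set
for epoch in range(200):
    loss = hedging_loss(train)
    opt.zero_grad()
    loss.backward()
    opt.step()

with torch.no_grad():                 # generalization gap = out-of-sample minus in-sample loss
    print("in-sample loss :", hedging_loss(train).item())
    print("out-of-sample  :", hedging_loss(simulate_paths(n_paths)).item())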
DOI: 10.48550/arxiv.2011.09349