Deep Reinforcement Learning for Quantitative Trading
Format: Article
Language: English
Abstract: Artificial Intelligence (AI) and Machine Learning (ML) are transforming the domain of Quantitative Trading (QT) through the deployment of advanced algorithms capable of sifting through extensive financial datasets to pinpoint lucrative investment opportunities. AI-driven models, particularly those employing ML techniques such as deep learning and reinforcement learning, have shown great prowess in predicting market trends and executing trades with a speed and accuracy that far surpass human capabilities. Their capacity to automate critical tasks, such as discerning market conditions and executing trading strategies, has been pivotal. However, persistent challenges remain in current QT methods, especially in effectively handling noisy and high-frequency financial data. Striking a balance between exploration and exploitation poses another challenge for AI-driven trading agents. To surmount these hurdles, our proposed solution, QTNet, introduces an adaptive trading model that autonomously formulates QT strategies through an intelligent trading agent. By incorporating deep reinforcement learning (DRL) with imitative learning methodologies, we bolster the proficiency of the model. To tackle the challenges posed by volatile financial data, we formulate the QT mechanism within the framework of a Partially Observable Markov Decision Process (POMDP). Moreover, by embedding imitative learning, the model can capitalize on traditional trading tactics, maintaining a balance between exploration and exploitation. For a more realistic simulation, the trading agent is trained on minute-frequency data sourced from the live financial market. Experimental findings underscore the model's proficiency in extracting robust market features and its adaptability to diverse market conditions.
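
The abstract's framing of trading as a POMDP, in which an agent observes only part of the market state and must balance exploration against exploitation, can be illustrated with a minimal sketch. The snippet below is an assumption-based illustration only, not the QTNet model described in the paper: the environment name `ToyTradingPOMDP`, the observation window, the three-action position scheme, and the epsilon-greedy stand-in policy are hypothetical choices made for demonstration.

```python
# Illustrative sketch only: a toy partially observable trading environment and an
# epsilon-greedy action loop. This is NOT the QTNet implementation from the paper;
# all names and parameters here are assumptions for illustration.
import numpy as np

class ToyTradingPOMDP:
    """The agent sees only a short window of recent returns (partial observation),
    never the full underlying price process."""
    def __init__(self, prices, window=10):
        self.prices = np.asarray(prices, dtype=float)
        self.window = window
        self.t = window
        self.position = 0  # -1 short, 0 flat, +1 long

    def reset(self):
        self.t = self.window
        self.position = 0
        return self._observe()

    def _observe(self):
        # Partial observation: the last `window` log-returns only.
        recent = self.prices[self.t - self.window:self.t + 1]
        return np.diff(np.log(recent))

    def step(self, action):
        # action in {0: short, 1: flat, 2: long}
        self.position = action - 1
        self.t += 1
        reward = self.position * (self.prices[self.t] - self.prices[self.t - 1])
        done = self.t >= len(self.prices) - 1
        return self._observe(), reward, done

# Epsilon-greedy stand-in for a learned DRL policy: random actions explore,
# a naive momentum rule exploits.
rng = np.random.default_rng(0)
prices = 100 + np.cumsum(rng.normal(0, 0.1, size=500))  # synthetic price path
env = ToyTradingPOMDP(prices)
obs, done, total, epsilon = env.reset(), False, 0.0, 0.1
while not done:
    if rng.random() < epsilon:
        action = int(rng.integers(0, 3))      # explore
    else:
        action = 2 if obs[-1] > 0 else 0      # exploit: follow the last return's sign
    obs, reward, done = env.step(action)
    total += reward
print(f"cumulative PnL of toy policy: {total:.2f}")
```

In the paper's setting, the epsilon-greedy rule would be replaced by a DRL policy guided by imitative learning from traditional trading tactics, and the synthetic price path by minute-frequency data from the live market.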
DOI: 10.48550/arxiv.2312.15730