Automated market maker inventory management with deep reinforcement learning

Bibliographic details
Published in: Applied intelligence (Dordrecht, Netherlands), 2023-10, Vol.53 (19), p.22249-22266
Authors: Vicente, Óscar Fernández; Fernández, Fernando; García, Javier
Format: Article
Language: English
Online access: Full text
Description
Abstract: Stock markets result from the interaction of multiple participants, and market makers are one of them. Their main goal is to provide liquidity and market depth to the stock market by streaming bids and offers on both sides of the order book, at different price levels. This activity gives the rest of the participants more available prices at which to buy or sell stocks. In recent years, reinforcement learning market maker agents have proven able to be profitable. However, profit is not the only measure of a market maker's quality: inventory management arises as a source of risk that must be kept under control. In this paper, we focus on inventory risk management, designing an adaptive reward function able to control inventory according to designer preferences. To achieve this, we introduce two control coefficients, AIIF (Alpha Inventory Impact Factor) and DITF (Dynamic Inventory Threshold Factor), which dynamically modulate the behavior of the market maker agent according to its evolving liquidity, with good results. In addition, we analyze the impact of these factors on trading operations, detailing the underlying strategies performed by these intelligent agents in terms of operations, profitability, and inventory management. Finally, we present a comparison with other existing reward functions to illustrate the robustness of our approach.
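The record does not reproduce the paper's actual reward formulas, so the Python fragment below is only a minimal sketch of the idea the abstract describes: a per-step reward that pays mark-to-market P&L but subtracts a penalty, weighted by an AIIF-like coefficient, once the position strays beyond a DITF-like inventory threshold. The function name, signature, and penalty shape are illustrative assumptions, not the authors' implementation.

def inventory_aware_reward(pnl_change: float,
                           inventory: float,
                           threshold: float,
                           aiif: float) -> float:
    """Sketch of an inventory-penalized market-making reward.

    pnl_change -- mark-to-market profit/loss over the last step
    inventory  -- signed position currently held by the market maker
    threshold  -- soft inventory bound (stands in for a DITF-style
                  dynamic threshold; hypothetical, not the paper's term)
    aiif       -- weight of the inventory penalty (stands in for the
                  Alpha Inventory Impact Factor; hypothetical form)
    """
    # No penalty while the position stays inside the threshold;
    # beyond it, the penalty grows linearly with the excess inventory.
    excess = max(0.0, abs(inventory) - threshold)
    return pnl_change - aiif * excess

# Example: a profitable step, but the agent holds 150 shares above a
# 100-share threshold, so the reward is dampened from 5.0 to 2.0.
print(inventory_aware_reward(pnl_change=5.0, inventory=250,
                             threshold=100, aiif=0.02))

Under this reading, a larger AIIF-like weight pushes the agent toward flat inventory at the cost of profit, while a looser threshold tolerates larger positions; the paper's contribution, per the abstract, is making this trade-off tunable to designer preferences.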
ISSN: 0924-669X; 1573-7497
DOI: 10.1007/s10489-023-04647-9