FinRL: A Deep Reinforcement Learning Library for Automated Stock Trading in Quantitative Finance

As deep reinforcement learning (DRL) has been recognized as an effective approach in quantitative finance, gaining hands-on experience is attractive to beginners. However, training a practical DRL trading agent that decides where to trade, at what price, and in what quantity involves error-prone and arduous development and debugging.


Bibliographic details
Published in: arXiv.org 2022-03
Main authors: Xiao-Yang, Liu; Yang, Hongyang; Chen, Qian; Zhang, Runjia; Yang, Liuqing; Bowen, Xiao; Wang, Christina Dan
Format: Article
Language: eng
Subjects: Algorithms; Debugging; Deep learning; Finance; Libraries; Modular structures; Neural networks; Reproducibility; Stock exchanges; Virtual environments
Online access: Full text
container_title arXiv.org
creator Xiao-Yang, Liu
Yang, Hongyang
Chen, Qian
Zhang, Runjia
Yang, Liuqing
Bowen, Xiao
Wang, Christina Dan
description As deep reinforcement learning (DRL) has been recognized as an effective approach in quantitative finance, gaining hands-on experience is attractive to beginners. However, training a practical DRL trading agent that decides where to trade, at what price, and in what quantity involves error-prone and arduous development and debugging. In this paper, we introduce FinRL, a DRL library that helps beginners gain exposure to quantitative finance and develop their own stock trading strategies. Along with easily reproducible tutorials, FinRL allows users to streamline their own development and to compare easily against existing schemes. Within FinRL, virtual environments are configured with stock market datasets, trading agents are trained with neural networks, and extensive backtesting analyzes trading performance. Moreover, it incorporates important trading constraints such as transaction costs, market liquidity, and the investor's degree of risk aversion. FinRL emphasizes completeness, hands-on tutorials, and reproducibility, which favors beginners: (i) at multiple levels of time granularity, FinRL simulates trading environments across various stock markets, including NASDAQ-100, DJIA, S&P 500, HSI, SSE 50, and CSI 300; (ii) organized in a layered architecture with a modular structure, FinRL provides fine-tuned state-of-the-art DRL algorithms (DQN, DDPG, PPO, SAC, A2C, TD3, etc.), commonly used reward functions, and standard evaluation baselines to reduce debugging workloads and promote reproducibility; and (iii) being highly extensible, FinRL reserves a complete set of user-import interfaces. Furthermore, we include three application demonstrations, namely single stock trading, multiple stock trading, and portfolio allocation. The FinRL library is available on GitHub at https://github.com/AI4Finance-LLC/FinRL-Library.
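The workflow the abstract describes (a gym-style market environment configured with price data, a transaction-cost constraint, and a neural-network agent trained against it) can be sketched in a few lines. The following is a minimal, self-contained toy illustration of that pattern, not FinRL's actual API; ToyTradingEnv and its parameters are invented here, and real usage should follow the tutorials in the GitHub repository linked above.

# Minimal sketch of the gym-style DRL trading pattern described in the
# abstract: an environment built from (here, synthetic) market data, a
# transaction-cost constraint, and a neural-network agent trained on it.
# ToyTradingEnv is invented for illustration and is NOT FinRL's API.
import numpy as np
import gymnasium as gym
from gymnasium import spaces
from stable_baselines3 import PPO  # one of the algorithms FinRL wraps

class ToyTradingEnv(gym.Env):
    """One synthetic price series; the action picks a position in {-1, 0, +1}."""

    def __init__(self, n_steps=252, cost_pct=0.001):
        super().__init__()
        self.n_steps = n_steps
        self.cost_pct = cost_pct                 # proportional transaction cost
        self.action_space = spaces.Discrete(3)   # 0=short, 1=flat, 2=long
        self.observation_space = spaces.Box(
            low=-np.inf, high=np.inf, shape=(2,), dtype=np.float32)

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        # Geometric random walk standing in for a real stock-market dataset.
        steps = self.np_random.normal(0.0, 0.01, size=self.n_steps)
        self.prices = 100.0 * np.exp(np.cumsum(steps))
        self.t = 0
        self.pos = 0
        return self._obs(), {}

    def _obs(self):
        # Observation: last log-return plus the current position.
        ret = 0.0 if self.t == 0 else float(
            np.log(self.prices[self.t] / self.prices[self.t - 1]))
        return np.array([ret, self.pos], dtype=np.float32)

    def step(self, action):
        new_pos = int(action) - 1                # map {0,1,2} -> {-1,0,+1}
        cost = self.cost_pct * abs(new_pos - self.pos)
        self.t += 1
        ret = float(np.log(self.prices[self.t] / self.prices[self.t - 1]))
        reward = new_pos * ret - cost            # P&L minus trading cost
        self.pos = new_pos
        terminated = self.t >= self.n_steps - 1
        return self._obs(), reward, terminated, False, {}

# Train a PPO agent (an MLP policy) against the environment.
model = PPO("MlpPolicy", ToyTradingEnv(), verbose=0)
model.learn(total_timesteps=10_000)

In FinRL itself, the same three ingredients are supplied by its layered design: the environment layer wraps real stock-market datasets, the agent layer exposes the fine-tuned DRL algorithms listed above, and backtesting utilities evaluate the resulting trading performance.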
format Article
fulltext fulltext
identifier EISSN: 2331-8422
ispartof arXiv.org, 2022-03
issn 2331-8422
language eng
recordid cdi_proquest_journals_2462534713
source Free E-Journals
subjects Algorithms
Debugging
Deep learning
Finance
Libraries
Modular structures
Neural networks
Reproducibility
Stock exchanges
Virtual environments
title FinRL: A Deep Reinforcement Learning Library for Automated Stock Trading in Quantitative Finance