Action Pick-up in Dynamic Action Space Reinforcement Learning
creator | Ye, Jiaqi; Li, Xiaodong; Wu, Pangjing; Wang, Feng |
description | Most reinforcement learning algorithms are based on a key assumption that
Markov decision processes (MDPs) are stationary. However, non-stationary MDPs
with dynamic action spaces are omnipresent in real-world scenarios. While problems
of dynamic action space reinforcement learning have been studied by many
previous works, how to choose valuable actions from new and unseen actions to
improve learning efficiency remains unaddressed. To tackle this problem, we
propose an intelligent Action Pick-up (AP) algorithm that autonomously chooses,
from a set of new actions, the valuable actions most likely to boost
performance. In this paper, we first analyze theoretically and find that a prior
optimal policy plays an important role in action pick-up by providing useful
knowledge and experience. Then, we design two different AP methods based on
the prior optimal policy: a frequency-based global method and a state
clustering-based local method. Finally, we evaluate AP in two simulated but
challenging environments where action spaces vary over time. Experimental
results demonstrate that our proposed AP outperforms the baselines in
learning efficiency. |
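
The abstract names two AP variants but gives no implementation details. The following is a minimal, hypothetical Python sketch of how such pick-up rules could look: the function names, the feature dictionaries, and the cosine-similarity scoring are assumptions made for illustration, not the authors' actual algorithm.

```python
# Hypothetical sketch of the two Action Pick-up (AP) variants described in the
# abstract. All names and scoring rules below are assumptions, not the paper's code.
import numpy as np
from sklearn.cluster import KMeans


def frequency_based_ap(prior_trajectories, new_actions, action_features, k=3):
    """Global AP (assumed form): favor new actions whose features resemble the
    actions the prior optimal policy used most often."""
    counts = {}
    for traj in prior_trajectories:          # traj: list of (state, action) pairs
        for _, a in traj:
            counts[a] = counts.get(a, 0) + 1
    total = sum(counts.values())
    # Frequency-weighted prototype of the prior policy's behavior in feature space.
    proto = sum((c / total) * action_features[a] for a, c in counts.items())

    def score(a):
        v = action_features[a]
        return float(v @ proto / (np.linalg.norm(v) * np.linalg.norm(proto) + 1e-8))

    return sorted(new_actions, key=score, reverse=True)[:k]


def clustering_based_ap(prior_trajectories, new_actions, action_features,
                        state_features, n_clusters=2, k=3):
    """Local AP (assumed form): cluster the states the prior policy visited,
    build one action prototype per cluster, and score each new action against
    its best-matching cluster."""
    pairs = [(s, a) for traj in prior_trajectories for s, a in traj]
    X = np.stack([state_features[s] for s, _ in pairs])
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(X)

    protos = []
    for c in range(n_clusters):
        feats = [action_features[a] for (_, a), l in zip(pairs, labels) if l == c]
        if feats:
            protos.append(np.mean(feats, axis=0))

    def score(a):
        v = action_features[a]
        return max(float(v @ p / (np.linalg.norm(v) * np.linalg.norm(p) + 1e-8))
                   for p in protos)

    return sorted(new_actions, key=score, reverse=True)[:k]


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy data: random 8-d action features and 4-d state features.
    action_features = {a: rng.normal(size=8) for a in range(10)}
    state_features = {s: rng.normal(size=4) for s in range(3)}
    prior = [[(0, 1), (1, 1), (2, 3)], [(0, 1), (1, 3), (2, 2)]]
    new_actions = [5, 6, 7, 8, 9]
    print("global pick:", frequency_based_ap(prior, new_actions, action_features))
    print("local pick:", clustering_based_ap(prior, new_actions,
                                             action_features, state_features))
```

Under these assumptions, the global variant pools all prior experience into a single prototype, while the local variant preserves state-dependent preferences; both only rank the candidate actions and leave the downstream RL algorithm unchanged.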
doi_str_mv | 10.48550/arxiv.2304.00873 |
format | Article |
creationdate | 2023-04-03 |
rights | http://arxiv.org/licenses/nonexclusive-distrib/1.0 (free to read) |
identifier | DOI: 10.48550/arxiv.2304.00873 |
language | eng |
source | arXiv.org |
subjects | Computer Science - Artificial Intelligence; Computer Science - Learning |
title | Action Pick-up in Dynamic Action Space Reinforcement Learning |
url | https://arxiv.org/abs/2304.00873 |