Fast Online Exact Solutions for Deterministic MDPs with Sparse Rewards

creator Bertram, Joshua R; Yang, Xuxi; Wei, Peng
description Markov Decision Processes (MDPs) are a mathematical framework for modeling sequential decision making under uncertainty. The classical approaches for solving MDPs are well known and have been widely studied; some rely on approximation techniques to handle MDPs with large state and/or action spaces. However, most of these classical approaches and their approximations still require substantial computation time to converge, and the solution usually must be re-computed whenever the reward function changes. This paper introduces a novel alternative approach for exactly and efficiently solving deterministic, continuous MDPs with sparse reward sources. When the environment is such that the "distance" between states can be determined in constant time, e.g. a grid world, the algorithm runs in $O(|R|^2 \times |A|^2 \times |S|)$ time, where $|R|$ is the number of reward sources, $|A|$ is the number of actions, and $|S|$ is the number of states. Its memory complexity is $O(|S| + |R| \times |A|)$. This new approach opens new avenues for boosting computational performance for certain classes of MDPs and is of tremendous value for MDP applications such as robotics and unmanned systems. The paper describes the algorithm, presents numerical experiments demonstrating its computational performance, and provides a rigorous mathematical description of the approach.
format Article
identifier DOI: 10.48550/arxiv.1805.02785
language eng
recordid cdi_arxiv_primary_1805_02785
source arXiv.org
subjects Computer Science - Artificial Intelligence
Computer Science - Learning
Statistics - Machine Learning
title Fast Online Exact Solutions for Deterministic MDPs with Sparse Rewards
url https://arxiv.org/abs/1805.02785