Learning and Planning for Time-Varying MDPs Using Maximum Likelihood Estimation
This paper proposes a formal approach to online learning and planning for agents operating in a priori unknown, time-varying environments. The proposed method computes the maximally likely model of the environment, given the observations about the environment made by an agent earlier in the system run and assuming knowledge of a bound on the maximal rate of change of system dynamics.
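The abstract contrasts drift-aware estimation with the usual all-history empirical estimate for time-invariant MDPs. As a minimal illustrative sketch (not the paper's maximum-likelihood construction), the snippet below uses a sliding-window estimate as a stand-in: by discarding old samples, it can track a transition probability after an abrupt change, while the time-invariant estimate stays stuck between the two regimes. The function name, window size, and the two-outcome setup are all assumptions for illustration.

```python
import random
from collections import deque

def sliding_window_estimate(outcomes, window=50):
    """Empirical probability estimate over only the most recent
    observations: a crude stand-in for drift-aware estimation, since
    dropping old samples lets the estimate follow a change."""
    recent = list(outcomes)[-window:]
    if not recent:
        return 0.5  # uninformative default when no data is available
    return sum(recent) / len(recent)

# A two-outcome "action" whose success probability jumps at t = 100.
random.seed(0)
history = deque()
for t in range(200):
    p_true = 0.8 if t < 100 else 0.2  # abrupt change in dynamics
    history.append(1 if random.random() < p_true else 0)

# The windowed estimate recovers the post-change probability (~0.2),
# while the all-history average lingers near the two-regime mean (~0.5).
windowed = sliding_window_estimate(history, window=50)
overall = sum(history) / len(history)
```

The window length plays the role the paper assigns to the bound on the rate of change: a tighter bound justifies trusting a longer stretch of history.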
Saved in:
Published in: | Journal of machine learning research 2021, Vol.22, p.1-40 |
---|---|
Main authors: | Ornik, Melkior; Topcu, Ufuk |
Format: | Article |
Language: | eng |
Online access: | Full text |
container_end_page | 40 |
---|---|
container_issue | |
container_start_page | 1 |
container_title | Journal of machine learning research |
container_volume | 22 |
creator | Ornik, Melkior; Topcu, Ufuk |
description | This paper proposes a formal approach to online learning and planning for agents operating in a priori unknown, time-varying environments. The proposed method computes the maximally likely model of the environment, given the observations about the environment made by an agent earlier in the system run and assuming knowledge of a bound on the maximal rate of change of system dynamics. Such an approach generalizes the estimation method commonly used in learning algorithms for unknown Markov decision processes with time-invariant transition probabilities, but is also able to quickly and correctly identify the system dynamics following a change. Based on the proposed method, we generalize the exploration bonuses used in learning for time-invariant Markov decision processes by introducing a notion of uncertainty in a learned time-varying model, and develop a control policy for time-varying Markov decision processes based on the exploitation and exploration trade-off. We demonstrate the proposed methods on four numerical examples: a patrolling task with a change in system dynamics, a two-state MDP with periodically changing outcomes of actions, a wind flow estimation task, and a multi-armed bandit problem with periodically changing probabilities of different rewards. |
format | Article |
pmid | 35002545 |
publisher | United States |
fulltext | fulltext |
identifier | ISSN: 1532-4435 |
ispartof | Journal of machine learning research, 2021, Vol.22, p.1-40 |
issn | 1532-4435 1533-7928 |
language | eng |
recordid | cdi_pubmedcentral_primary_oai_pubmedcentral_nih_gov_8739185 |
source | ACM Digital Library Complete; EZB-FREE-00999 freely available EZB journals |
title | Learning and Planning for Time-Varying MDPs Using Maximum Likelihood Estimation |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-17T04%3A00%3A09IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_pubme&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Learning%20and%20Planning%20for%20Time-Varying%20MDPs%20Using%20Maximum%20Likelihood%20Estimation&rft.jtitle=Journal%20of%20machine%20learning%20research&rft.au=Ornik,%20Melkior&rft.date=2021&rft.volume=22&rft.spage=1&rft.epage=40&rft.pages=1-40&rft.issn=1532-4435&rft.eissn=1533-7928&rft_id=info:doi/&rft_dat=%3Cproquest_pubme%3E2618513578%3C/proquest_pubme%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2618513578&rft_id=info:pmid/35002545&rfr_iscdi=true |
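The abstract's exploration bonuses for time-varying MDPs generalize the bonuses used in the time-invariant case. A minimal sketch of that idea, under assumptions of my own (the function `drifting_bonus`, its parameters, and the additive form are illustrative, not the paper's definition): take a standard UCB-style confidence width and inflate it by a staleness term that grows with the time elapsed since the action was last tried, scaled by the assumed bound on the per-step drift of the dynamics.

```python
import math

def drifting_bonus(count, t, last_tried, drift_bound, c=1.0):
    """UCB-style exploration bonus inflated by possible drift: the
    usual sqrt(log t / n) confidence width, plus a term growing with
    the time since the action was last tried, scaled by an assumed
    bound on the per-step change of the dynamics."""
    if count == 0:
        return float("inf")  # an untried action is maximally uncertain
    confidence = c * math.sqrt(math.log(max(t, 2)) / count)
    staleness = drift_bound * (t - last_tried)
    return confidence + staleness

# An action untouched for 50 steps earns a larger bonus than one
# tried 10 steps ago, even with identical sample counts.
stale = drifting_bonus(count=10, t=100, last_tried=50, drift_bound=0.01)
fresh = drifting_bonus(count=10, t=100, last_tried=90, drift_bound=0.01)
```

Under a zero drift bound the staleness term vanishes and the bonus reduces to the familiar time-invariant form, matching the abstract's claim that the approach generalizes the time-invariant case.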