Twice Regularized Markov Decision Processes: The Equivalence between Robustness and Regularization
Robust Markov decision processes (MDPs) aim to handle changing or partially known system dynamics. To solve them, one typically resorts to robust optimization methods. However, this significantly increases computational complexity and limits scalability in both learning and planning. On the other hand, regularized MDPs show more stability in policy learning without impairing time complexity. Yet, they generally do not encompass uncertainty in the model dynamics. In this work, we aim to learn robust MDPs using regularization. We first show that regularized MDPs are a particular instance of robust MDPs with uncertain reward. We thus establish that policy iteration on reward-robust MDPs can have the same time complexity as on regularized MDPs. We further extend this relationship to MDPs with uncertain transitions: this leads to a regularization term with an additional dependence on the value function. We then generalize regularized MDPs to twice regularized MDPs (\(\text{R}^2\) MDPs), i.e., MDPs with \(\textit{both}\) value and policy regularization. The corresponding Bellman operators enable us to derive planning and learning schemes with convergence and generalization guarantees, thus reducing robustness to regularization. We numerically show this two-fold advantage on tabular and physical domains, highlighting the fact that \(\text{R}^2\) preserves its efficacy in continuous environments.
Saved in:
Published in: | arXiv.org 2023-03 |
---|---|
Main Authors: | Derman, Esther; Men, Yevgeniy; Geist, Matthieu; Mannor, Shie |
Format: | Article |
Language: | eng |
Keywords: | Complexity; Iterative methods; Learning; Markov analysis; Markov processes; Operators (mathematics); Optimization; Regularization; Robustness (mathematics); System dynamics |
Online Access: | Full Text |
container_end_page | |
container_issue | |
container_start_page | |
container_title | arXiv.org |
container_volume | |
creator | Derman, Esther; Men, Yevgeniy; Geist, Matthieu; Mannor, Shie |
description | Robust Markov decision processes (MDPs) aim to handle changing or partially known system dynamics. To solve them, one typically resorts to robust optimization methods. However, this significantly increases computational complexity and limits scalability in both learning and planning. On the other hand, regularized MDPs show more stability in policy learning without impairing time complexity. Yet, they generally do not encompass uncertainty in the model dynamics. In this work, we aim to learn robust MDPs using regularization. We first show that regularized MDPs are a particular instance of robust MDPs with uncertain reward. We thus establish that policy iteration on reward-robust MDPs can have the same time complexity as on regularized MDPs. We further extend this relationship to MDPs with uncertain transitions: this leads to a regularization term with an additional dependence on the value function. We then generalize regularized MDPs to twice regularized MDPs (\(\text{R}^2\) MDPs), i.e., MDPs with \(\textit{both}\) value and policy regularization. The corresponding Bellman operators enable us to derive planning and learning schemes with convergence and generalization guarantees, thus reducing robustness to regularization. We numerically show this two-fold advantage on tabular and physical domains, highlighting the fact that \(\text{R}^2\) preserves its efficacy in continuous environments. |
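The abstract's first claim, that regularized MDPs are reward-robust MDPs in disguise, can be sketched with a standard support-function argument. The ball-shaped uncertainty set, its radius \(\alpha_s\), and the norm below are illustrative assumptions rather than the paper's exact construction:

```latex
% Reward-uncertainty set: a ball of radius \alpha_s around a nominal reward r_0
\mathcal{R} = \bigl\{\, r_0 + r' \;:\; \|r'_s\| \le \alpha_s \ \ \forall s \,\bigr\}.

% Robust evaluation of a policy \pi at state s. The inner minimization is the
% support function of the ball,
%   \min_{\|r'_s\| \le \alpha_s} \langle \pi_s, r'_s \rangle
%     = -\alpha_s \|\pi_s\|_* ,
% with \|\cdot\|_* the dual norm, so reward-robustness reduces to policy
% regularization with \Omega(\pi_s) = \alpha_s \|\pi_s\|_* :
(T^{\pi}_{\mathcal{R}} v)(s)
  = \min_{r \in \mathcal{R}} \,\langle \pi_s, r_s + \gamma P_s v \rangle
  = \langle \pi_s, (r_0)_s + \gamma P_s v \rangle - \alpha_s \|\pi_s\|_* .
```

Since the right-hand side is an ordinary regularized Bellman backup on the nominal model, policy iteration on this reward-robust MDP costs no more per step than on a regularized MDP, which is the complexity claim made in the abstract.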
format | Article |
publisher | Ithaca: Cornell University Library, arXiv.org |
rights | 2023. This work is published under http://creativecommons.org/licenses/by/4.0/ (the “License”). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License. |
startdate | 2023-03-12 |
fulltext | fulltext |
identifier | EISSN: 2331-8422 |
ispartof | arXiv.org, 2023-03 |
issn | 2331-8422 |
language | eng |
recordid | cdi_proquest_journals_2786647244 |
source | Free E-Journals |
subjects | Complexity; Iterative methods; Learning; Markov analysis; Markov processes; Operators (mathematics); Optimization; Regularization; Robustness (mathematics); System dynamics |
title | Twice Regularized Markov Decision Processes: The Equivalence between Robustness and Regularization |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2024-12-27T00%3A23%3A25IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest&rft_val_fmt=info:ofi/fmt:kev:mtx:book&rft.genre=document&rft.atitle=Twice%20Regularized%20Markov%20Decision%20Processes:%20The%20Equivalence%20between%20Robustness%20and%20Regularization&rft.jtitle=arXiv.org&rft.au=Derman,%20Esther&rft.date=2023-03-12&rft.eissn=2331-8422&rft_id=info:doi/&rft_dat=%3Cproquest%3E2786647244%3C/proquest%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2786647244&rft_id=info:pmid/&rfr_iscdi=true |