Twice Regularized Markov Decision Processes: The Equivalence between Robustness and Regularization

Robust Markov decision processes (MDPs) aim to handle changing or partially known system dynamics. To solve them, one typically resorts to robust optimization methods. However, this significantly increases computational complexity and limits scalability in both learning and planning. On the other hand, regularized MDPs show more stability in policy learning without impairing time complexity. Yet, they generally do not encompass uncertainty in the model dynamics. In this work, we aim to learn robust MDPs using regularization.

Detailed Description

Saved in:
Bibliographic Details
Main Authors: Derman, Esther, Men, Yevgeniy, Geist, Matthieu, Mannor, Shie
Format: Article
Language: eng
Subjects:
Online Access: Order full text
creator Derman, Esther; Men, Yevgeniy; Geist, Matthieu; Mannor, Shie
description Robust Markov decision processes (MDPs) aim to handle changing or partially known system dynamics. To solve them, one typically resorts to robust optimization methods. However, this significantly increases computational complexity and limits scalability in both learning and planning. On the other hand, regularized MDPs show more stability in policy learning without impairing time complexity. Yet, they generally do not encompass uncertainty in the model dynamics. In this work, we aim to learn robust MDPs using regularization. We first show that regularized MDPs are a particular instance of robust MDPs with uncertain reward. We thus establish that policy iteration on reward-robust MDPs can have the same time complexity as on regularized MDPs. We further extend this relationship to MDPs with uncertain transitions: this leads to a regularization term with an additional dependence on the value function. We then generalize regularized MDPs to twice regularized MDPs ($\text{R}^2$ MDPs), i.e., MDPs with $\textit{both}$ value and policy regularization. The corresponding Bellman operators enable us to derive planning and learning schemes with convergence and generalization guarantees, thus reducing robustness to regularization. We numerically show this two-fold advantage on tabular and physical domains, highlighting the fact that $\text{R}^2$ preserves its efficacy in continuous environments.
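The equivalence sketched in the abstract (policy regularization standing in for an inner robust optimization) can be illustrated with entropy-regularized value iteration, where the log-sum-exp backup is the closed form of the policy-regularized maximization over the simplex. This is a minimal tabular sketch, not the paper's exact R² construction; the function name, toy dimensions, and the choice of negative-entropy regularizer are illustrative assumptions.

```python
import numpy as np

def regularized_value_iteration(P, r, gamma=0.9, tau=0.1, iters=500):
    """Tabular value iteration for an entropy-regularized MDP (illustrative).

    The soft (log-sum-exp) backup is the closed form of
    max_pi <pi, q> + tau * H(pi), so the inner optimization that a robust
    method would solve numerically is replaced by a cheap analytic update.
    P: transition tensor, shape (S, A, S); r: rewards, shape (S, A).
    """
    S, A, _ = P.shape
    v = np.zeros(S)
    for _ in range(iters):
        q = r + gamma * P @ v                              # (S, A) Q-values
        v_new = tau * np.log(np.exp(q / tau).sum(axis=1))  # soft backup
        if np.max(np.abs(v_new - v)) < 1e-10:
            break
        v = v_new
    # Regularized greedy policy: softmax over Q (stabilized by max-shift)
    q = r + gamma * P @ v
    pi = np.exp((q - q.max(axis=1, keepdims=True)) / tau)
    pi /= pi.sum(axis=1, keepdims=True)
    return v, pi
```

Since the entropy bonus is nonnegative, the soft value dominates the unregularized optimal value pointwise, and each backup remains a gamma-contraction, which is the kind of convergence guarantee the abstract attributes to the R² Bellman operators.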
doi_str_mv 10.48550/arxiv.2303.06654
format Article
fullrecord (raw PNX record omitted; unique fields retained below)
creationdate 2023-03-12
rights http://creativecommons.org/licenses/by/4.0
oa free_for_read
link https://arxiv.org/abs/2303.06654
fulltext fulltext_linktorsrc
identifier DOI: 10.48550/arxiv.2303.06654
language eng
recordid cdi_arxiv_primary_2303_06654
source arXiv.org
subjects Computer Science - Artificial Intelligence
Computer Science - Learning
title Twice Regularized Markov Decision Processes: The Equivalence between Robustness and Regularization
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-14T02%3A43%3A35IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-arxiv_GOX&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Twice%20Regularized%20Markov%20Decision%20Processes:%20The%20Equivalence%20between%20Robustness%20and%20Regularization&rft.au=Derman,%20Esther&rft.date=2023-03-12&rft_id=info:doi/10.48550/arxiv.2303.06654&rft_dat=%3Carxiv_GOX%3E2303_06654%3C/arxiv_GOX%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true