Conservative Optimistic Policy Optimization via Multiple Importance Sampling
Reinforcement Learning (RL) has been able to solve hard problems, such as playing Atari games or the game of Go, with a single unified approach. Yet modern deep RL approaches are still not widely used in real-world applications. One reason is the lack of guarantees on the performance of the intermediate policies executed during learning, compared to an existing (already working) baseline policy. In this paper, we propose an online model-free algorithm that solves the conservative exploration problem in policy optimization. We show that the regret of the proposed approach is bounded by \(\tilde{\mathcal{O}}(\sqrt{T})\) for both discrete and continuous parameter spaces.
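The abstract does not spell out the algorithm, but the title points to multiple importance sampling over previously executed policies. The sketch below is a hedged illustration, assuming a Gaussian parameter-based (hyper-policy) setting: a balance-heuristic multiple importance sampling estimator of a candidate policy's expected return from samples gathered under earlier policies, plus a conservative test that only deploys the candidate when a lower confidence bound on that estimate stays above the baseline's value. All function names and the Gaussian setup are assumptions made for this example, not the authors' implementation.

```python
# Illustrative sketch only (not the paper's released code): balance-heuristic
# multiple importance sampling (MIS) for off-policy evaluation in
# parameter-based policy search, with a conservative acceptance test against
# a baseline. The Gaussian hyper-policy setup and all names are assumptions.
import numpy as np
from scipy.stats import norm


def mis_value_estimate(target_mean, target_std,
                       thetas, returns,
                       behaviour_means, behaviour_stds, counts):
    """Balance-heuristic MIS estimate of E_{theta ~ N(target)}[R(theta)].

    thetas, returns : parameters sampled by past behaviour policies and the
                      returns observed when executing them (pooled samples).
    behaviour_*     : means/stds of the K behaviour (sampling) Gaussians.
    counts          : how many pooled samples came from each Gaussian.
    """
    thetas = np.asarray(thetas, dtype=float)
    returns = np.asarray(returns, dtype=float)
    target_pdf = norm.pdf(thetas, loc=target_mean, scale=target_std)
    # Balance heuristic: weight each sample against the count-weighted mixture
    # of all behaviour densities, which keeps weights bounded even when one
    # behaviour policy overlaps poorly with the target.
    mixture = np.zeros_like(thetas)
    for mu, sigma, n in zip(behaviour_means, behaviour_stds, counts):
        mixture += n * norm.pdf(thetas, loc=mu, scale=sigma)
    weights = target_pdf / np.maximum(mixture, 1e-300)
    return float(np.sum(weights * returns))


def conservative_accept(candidate_lower_bound, baseline_value, tolerance=0.0):
    """Deploy a candidate policy only if a lower confidence bound on its
    estimated value does not fall below the baseline's value."""
    return candidate_lower_bound >= baseline_value - tolerance


# Hypothetical usage with one behaviour Gaussian:
# est = mis_value_estimate(0.5, 0.2, thetas=[0.3, 0.6], returns=[1.0, 2.0],
#                          behaviour_means=[0.0], behaviour_stds=[0.5],
#                          counts=[2])
# deploy = conservative_accept(candidate_lower_bound=est - 0.3,
#                              baseline_value=1.1)
```

In related importance-sampling policy-search analyses, the lower confidence bound is typically obtained from a concentration result whose width grows with the mismatch (e.g., a Rényi-divergence term) between the target policy and the behaviour mixture; the specific bound used in the paper is not reproduced here.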
Published in: | arXiv.org 2021-03 |
---|---|
Main authors: | Azize, Achraf; Othman Gaizi |
Format: | Article |
Language: | English |
Subjects: | Algorithms; Importance sampling; Optimization |
Online access: | Full text |
container_title | arXiv.org |
---|---|
creator | Azize, Achraf; Othman Gaizi |
description | Reinforcement Learning (RL) has been able to solve hard problems such as playing Atari games or solving the game of Go, with a unified approach. Yet modern deep RL approaches are still not widely used in real-world applications. One reason could be the lack of guarantees on the performance of the intermediate executed policies, compared to an existing (already working) baseline policy. In this paper, we propose an online model-free algorithm that solves conservative exploration in the policy optimization problem. We show that the regret of the proposed approach is bounded by \(\tilde{\mathcal{O}}(\sqrt{T})\) for both discrete and continuous parameter spaces. |
format | Article |
identifier | EISSN: 2331-8422 |
ispartof | arXiv.org, 2021-03 |
issn | 2331-8422 |
language | eng |
recordid | cdi_proquest_journals_2498814286 |
source | Freely Accessible Journals |
subjects | Algorithms; Importance sampling; Optimization |
title | Conservative Optimistic Policy Optimization via Multiple Importance Sampling |