Safe Policy Improvement with an Estimated Baseline Policy

Previous work has shown the unreliability of existing algorithms in the batch Reinforcement Learning setting and proposed the theoretically grounded Safe Policy Improvement with Baseline Bootstrapping (SPIBB) fix: reproduce the baseline policy in the uncertain state-action pairs in order to control the variance of the trained policy's performance. However, in many real-world applications such as dialogue systems, pharmaceutical tests, or crop management, data is collected under human supervision and the baseline policy remains unknown. In this paper, we apply SPIBB algorithms with a baseline estimate built from the data. We formally show safe policy improvement guarantees over the true baseline even without direct access to it. Our empirical experiments on finite and continuous-state tasks support the theoretical findings: they show little loss of performance compared to SPIBB when the baseline policy is given and, more importantly, that our approach drastically and significantly outperforms competing algorithms both in safe policy improvement and in average performance.
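As a rough illustration of the idea summarized in the abstract (not code from the paper), the sketch below shows a minimal SPIBB-style greedy improvement step that uses a baseline policy estimated from state-action counts. All names (estimate_baseline, spibb_greedy_step, n_wedge) are hypothetical and the details are assumptions, not the authors' implementation.

```python
import numpy as np

def estimate_baseline(counts):
    """Maximum-likelihood estimate of the unknown behaviour (baseline) policy
    from state-action visit counts of shape (n_states, n_actions)."""
    counts = np.asarray(counts, dtype=float)
    totals = counts.sum(axis=1, keepdims=True)
    pi_b = np.divide(counts, totals, out=np.zeros_like(counts), where=totals > 0)
    pi_b[totals.squeeze(1) == 0] = 1.0 / counts.shape[1]  # uniform in unvisited states
    return pi_b

def spibb_greedy_step(q, counts, pi_b, n_wedge):
    """One SPIBB-style greedy projection: keep the estimated baseline's probability
    on uncertain (low-count) state-action pairs and move the remaining mass to the
    best sufficiently sampled action according to the current Q estimate."""
    n_states, n_actions = q.shape
    pi = np.zeros((n_states, n_actions))
    for s in range(n_states):
        uncertain = counts[s] < n_wedge
        pi[s, uncertain] = pi_b[s, uncertain]   # reproduce the baseline where data is scarce
        trusted = np.flatnonzero(~uncertain)
        if trusted.size > 0:
            best = trusted[np.argmax(q[s, trusted])]
            pi[s, best] += 1.0 - pi[s, uncertain].sum()
        else:
            pi[s] = pi_b[s]                     # no trusted action: fall back to the baseline
    return pi
```

A complete algorithm would alternate this projection step with policy evaluation on a model estimated from the same batch; the count threshold n_wedge trades off safety (how closely the estimated baseline is reproduced) against how much of the policy is allowed to change.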


Bibliographic Details
Main authors: Simão, Thiago D; Laroche, Romain; Combes, Rémi Tachet des
Format: Article
Language: English
Published: 2019-09-11
DOI: 10.48550/arxiv.1909.05236
Subjects: Computer Science - Artificial Intelligence; Computer Science - Learning; Statistics - Machine Learning
Online access: Full text available on arXiv.org (https://arxiv.org/abs/1909.05236)