Better Exploration with Optimistic Actor-Critic

NeurIPS 2019. Actor-critic methods, a type of model-free Reinforcement Learning, have been successfully applied to challenging tasks in continuous control, often achieving state-of-the-art performance. However, wide-scale adoption of these methods in real-world domains is made difficult by their poor sample efficiency. We address this problem both theoretically and empirically. On the theoretical side, we identify two phenomena preventing efficient exploration in existing state-of-the-art algorithms such as Soft Actor Critic. First, combining a greedy actor update with a pessimistic estimate of the critic leads to the avoidance of actions that the agent does not know about, a phenomenon we call pessimistic underexploration. Second, current algorithms are directionally uninformed, sampling actions with equal probability in opposite directions from the current mean. This is wasteful, since we typically need actions taken along certain directions much more than others. To address both of these phenomena, we introduce a new algorithm, Optimistic Actor Critic (OAC), which approximates a lower and upper confidence bound on the state-action value function. This allows us to apply the principle of optimism in the face of uncertainty to perform directed exploration using the upper bound while still using the lower bound to avoid overestimation. We evaluate OAC in several challenging continuous control tasks, achieving state-of-the-art sample efficiency.
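The abstract describes OAC in terms of a lower and an upper confidence bound on the state-action value function. Below is a minimal, hedged sketch of one way such bounds can be formed from two bootstrapped critic estimates; the function name and the coefficients beta_ub / beta_lb are illustrative assumptions, not the authors' published code.

```python
# Illustrative sketch only: epistemic mean and spread of two critic estimates
# and the resulting optimistic / pessimistic bounds on Q(s, a).
import torch

def confidence_bounds(q1, q2, beta_ub=2.0, beta_lb=1.0):
    """Return (upper, lower) confidence bounds from two critic estimates."""
    mean = 0.5 * (q1 + q2)           # mean of the two bootstrapped estimates
    std = 0.5 * (q1 - q2).abs()      # spread, used as a proxy for uncertainty
    q_ub = mean + beta_ub * std      # optimistic bound, used for exploration
    q_lb = mean - beta_lb * std      # pessimistic bound, used for learning targets
    return q_ub, q_lb

# With beta_lb = 1 the lower bound equals min(q1, q2), the clipped
# double-Q estimate familiar from SAC/TD3-style critics.
q1 = torch.tensor([1.2, 0.4])
q2 = torch.tensor([0.8, 0.9])
print(confidence_bounds(q1, q2))
```

Bootstrapping from the pessimistic bound counters overestimation, while keeping the optimistic bound available for exploration is what the abstract calls optimism in the face of uncertainty.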

Detailed Description

Saved in:
Bibliographic Details
Main Authors: Ciosek, Kamil; Vuong, Quan; Loftin, Robert; Hofmann, Katja
Format: Article
Language: English
Subjects:
creator Ciosek, Kamil; Vuong, Quan; Loftin, Robert; Hofmann, Katja
description NeurIPS 2019. Actor-critic methods, a type of model-free Reinforcement Learning, have been successfully applied to challenging tasks in continuous control, often achieving state-of-the-art performance. However, wide-scale adoption of these methods in real-world domains is made difficult by their poor sample efficiency. We address this problem both theoretically and empirically. On the theoretical side, we identify two phenomena preventing efficient exploration in existing state-of-the-art algorithms such as Soft Actor Critic. First, combining a greedy actor update with a pessimistic estimate of the critic leads to the avoidance of actions that the agent does not know about, a phenomenon we call pessimistic underexploration. Second, current algorithms are directionally uninformed, sampling actions with equal probability in opposite directions from the current mean. This is wasteful, since we typically need actions taken along certain directions much more than others. To address both of these phenomena, we introduce a new algorithm, Optimistic Actor Critic (OAC), which approximates a lower and upper confidence bound on the state-action value function. This allows us to apply the principle of optimism in the face of uncertainty to perform directed exploration using the upper bound while still using the lower bound to avoid overestimation. We evaluate OAC in several challenging continuous control tasks, achieving state-of-the-art sample efficiency.
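To make the "directed exploration using the upper bound" concrete, here is a hedged sketch of a KL-budgeted shift of a Gaussian policy mean along the gradient of the upper bound, in the spirit of the mechanism the description outlines. The function name, the toy q_ub_fn, and the delta budget are illustrative assumptions rather than the paper's published code.

```python
# Illustrative sketch: shift the Gaussian policy mean towards higher values of
# the upper confidence bound, with the shift size limited by a KL budget delta.
import torch

def optimistic_mean_shift(q_ub_fn, state, mu, sigma, delta=0.1):
    """Return a shifted exploration mean for a diagonal Gaussian policy."""
    a = mu.clone().requires_grad_(True)
    q_ub = q_ub_fn(state, a)
    (grad,) = torch.autograd.grad(q_ub.sum(), a)   # gradient of the upper bound at the current mean
    sigma2 = sigma ** 2
    # Sigma-weighted norm sqrt(g^T Sigma g) for a diagonal covariance
    norm = torch.sqrt((grad ** 2 * sigma2).sum(dim=-1, keepdim=True)).clamp_min(1e-8)
    shift = (2.0 * delta) ** 0.5 * sigma2 * grad / norm
    return mu + shift   # exploration actions are then sampled around the shifted mean

# Toy usage with a quadratic stand-in for the upper bound:
q_ub_fn = lambda s, a: -((a - 0.5) ** 2).sum(dim=-1)
mu = torch.zeros(1, 2)
sigma = torch.full((1, 2), 0.3)
print(optimistic_mean_shift(q_ub_fn, None, mu, sigma))
```

Because the shift follows the gradient of the optimistic bound, sampling is no longer symmetric around the old mean, avoiding the "directionally uninformed" behaviour the description attributes to current algorithms.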
doi_str_mv 10.48550/arxiv.1910.12807
format Article
identifier DOI: 10.48550/arxiv.1910.12807
language eng
recordid cdi_arxiv_primary_1910_12807
source arXiv.org
subjects Computer Science - Learning
Statistics - Machine Learning
title Better Exploration with Optimistic Actor-Critic
url https://arxiv.org/abs/1910.12807