Maximum Entropy Gain Exploration for Long Horizon Multi-goal Reinforcement Learning

What goals should a multi-goal reinforcement learning agent pursue during training in long-horizon tasks? When the desired (test time) goal distribution is too distant to offer a useful learning signal, we argue that the agent should not pursue unobtainable goals. Instead, it should set its own intrinsic goals that maximize the entropy of the historical achieved goal distribution. We propose to optimize this objective by having the agent pursue past achieved goals in sparsely explored areas of the goal space, which focuses exploration on the frontier of the achievable goal set. We show that our strategy achieves an order of magnitude better sample efficiency than the prior state of the art on long-horizon multi-goal tasks including maze navigation and block stacking.
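A minimal, hypothetical sketch of the goal-selection idea described in the abstract: estimate the density of the historically achieved goals and propose the lowest-density (rarely achieved) ones as intrinsic training goals. This is not the authors' implementation; the function name, the KDE-based density estimate, and its defaults are illustrative assumptions.

import numpy as np
from scipy.stats import gaussian_kde

def propose_intrinsic_goals(achieved_goals, n_goals=4):
    # achieved_goals: (N, goal_dim) array of goals the agent has actually
    # reached so far (its historical achieved goal distribution).
    kde = gaussian_kde(achieved_goals.T)   # density estimate over achieved goals
    density = kde(achieved_goals.T)        # density at each achieved goal
    # The lowest-density achieved goals lie in sparsely explored regions
    # (the frontier of the achievable goal set); pursuing them heuristically
    # pushes up the entropy of the achieved goal distribution.
    frontier_idx = np.argsort(density)[:n_goals]
    return achieved_goals[frontier_idx]

# Toy usage: a 2-D goal space where most achieved goals cluster near the origin.
rng = np.random.default_rng(0)
goals = rng.normal(scale=[1.0, 0.3], size=(500, 2))
print(propose_intrinsic_goals(goals))  # returns goals far from the dense cluster

The KDE-plus-argmin selection above is only one simple way to bias goal sampling toward sparsely explored regions, chosen here for brevity; any density model over the achieved goal set could play the same role.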

Detailed description

Saved in:
Bibliographic details
Main authors: Pitis, Silviu, Chan, Harris, Zhao, Stephen, Stadie, Bradly, Ba, Jimmy
Format: Article
Language: eng
Subjects:
Online access: Order full text
creator Pitis, Silviu; Chan, Harris; Zhao, Stephen; Stadie, Bradly; Ba, Jimmy
description What goals should a multi-goal reinforcement learning agent pursue during training in long-horizon tasks? When the desired (test time) goal distribution is too distant to offer a useful learning signal, we argue that the agent should not pursue unobtainable goals. Instead, it should set its own intrinsic goals that maximize the entropy of the historical achieved goal distribution. We propose to optimize this objective by having the agent pursue past achieved goals in sparsely explored areas of the goal space, which focuses exploration on the frontier of the achievable goal set. We show that our strategy achieves an order of magnitude better sample efficiency than the prior state of the art on long-horizon multi-goal tasks including maze navigation and block stacking.
doi_str_mv 10.48550/arxiv.2007.02832
format Article
fulltext fulltext_linktorsrc
identifier DOI: 10.48550/arxiv.2007.02832
language eng
recordid cdi_arxiv_primary_2007_02832
source arXiv.org
subjects Computer Science - Artificial Intelligence
Computer Science - Learning
Computer Science - Robotics
Statistics - Machine Learning
title Maximum Entropy Gain Exploration for Long Horizon Multi-goal Reinforcement Learning
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-06T03%3A04%3A42IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-arxiv_GOX&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Maximum%20Entropy%20Gain%20Exploration%20for%20Long%20Horizon%20Multi-goal%20Reinforcement%20Learning&rft.au=Pitis,%20Silviu&rft.date=2020-07-06&rft_id=info:doi/10.48550/arxiv.2007.02832&rft_dat=%3Carxiv_GOX%3E2007_02832%3C/arxiv_GOX%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true