Universal Value Density Estimation for Imitation Learning and Goal-Conditioned Reinforcement Learning

This work considers two distinct settings: imitation learning and goal-conditioned reinforcement learning. In either case, effective solutions require the agent to reliably reach a specified state (a goal), or set of states (a demonstration). Drawing a connection between probabilistic long-term dynamics and the desired value function, this work introduces an approach which utilizes recent advances in density estimation to effectively learn to reach a given state. As our first contribution, we use this approach for goal-conditioned reinforcement learning and show that it is both efficient and does not suffer from hindsight bias in stochastic domains. As our second contribution, we extend the approach to imitation learning and show that it achieves state-of-the-art demonstration sample-efficiency on standard benchmark tasks.
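The connection the abstract draws between probabilistic long-term dynamics and the value function can be made concrete with a short sketch (an illustrative reading of the abstract, not an equation stated in this record): if reaching a goal state g is rewarded with r(s_t) = (1 - \gamma)\,\mathbb{1}[s_t = g], then the goal-conditioned value function is exactly a discounted density over future states,

Q^\pi(s, a; g) = (1 - \gamma) \sum_{t=1}^{\infty} \gamma^{t-1}\, p^\pi(s_t = g \mid s_0 = s, a_0 = a),

so learning to reach g reduces to conditional density estimation over the states visited after taking action a in state s, which is where recent advances in density estimation can be brought to bear.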

Bibliographic Details
Published in: arXiv.org 2020-02
Main Authors: Schroecker, Yannick; Isbell, Charles
Format: Article
Language: English
Subjects: Density; Learning
Online Access: Full text
Publisher: Ithaca: Cornell University Library, arXiv.org
EISSN: 2331-8422
Source: Free E-Journals