Estimating Q(s,s') with Deep Deterministic Dynamics Gradients

In this paper, we introduce a novel form of value function, $Q(s, s')$, that expresses the utility of transitioning from a state $s$ to a neighboring state $s'$ and then acting optimally thereafter. In order to derive an optimal policy, we develop a forward dynamics model that learns to make next-state predictions that maximize this value. This formulation decouples actions from values while still learning off-policy. We highlight the benefits of this approach in terms of value function transfer, learning within redundant action spaces, and learning off-policy from state observations generated by sub-optimal or completely random policies. Code and videos are available at http://sites.google.com/view/qss-paper.
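To make the formulation concrete: with deterministic dynamics, a natural Bellman backup for this value function is $Q(s, s') \leftarrow r(s, s') + \gamma \max_{s''} Q(s', s'')$, where the maximum ranges over states reachable from $s'$. Below is a minimal sketch of that backup on a toy deterministic chain. It illustrates the $Q(s, s')$ idea only and is not the paper's D3G algorithm: the paper trains a forward dynamics model to propose the value-maximizing next state (which is what scales the $\max$ to continuous spaces), whereas the toy environment here is small enough to enumerate successor states directly. All environment details and names in the sketch are invented for the example.

```python
# A minimal, self-contained sketch of the Q(s, s') idea on a deterministic
# chain environment. This is NOT the paper's D3G implementation: there a
# learned forward model proposes the value-maximizing next state, whereas
# here (small, discrete, deterministic) we enumerate reachable neighbors.
import numpy as np

N_STATES = 10            # states 0..9 on a chain; arriving at state 9 pays +1
GAMMA = 0.95
ALPHA = 0.5

def neighbors(s):
    """Deterministic transitions: move left or right along the chain."""
    return {max(s - 1, 0), min(s + 1, N_STATES - 1)}

def reward(s, s2):
    return 1.0 if s2 == N_STATES - 1 else 0.0

# Q is indexed by (state, next state) rather than (state, action).
Q = np.zeros((N_STATES, N_STATES))

rng = np.random.default_rng(0)
for _ in range(5000):
    # Off-policy: transitions come from a completely random behavior
    # policy, one of the settings the abstract highlights.
    s = int(rng.integers(N_STATES))
    s2 = int(rng.choice(sorted(neighbors(s))))
    # Bellman backup for Q(s, s'): bootstrap with the best reachable s''.
    target = reward(s, s2) + GAMMA * max(Q[s2, n] for n in neighbors(s2))
    Q[s, s2] += ALPHA * (target - Q[s, s2])

def act(s):
    """Greedy policy: pick the neighboring state with the highest Q(s, s').
    The 'action' is recovered implicitly from the chosen successor
    (here it is just the displacement s' - s)."""
    s2 = max(neighbors(s), key=lambda n: Q[s, n])
    return s2 - s, s2

s = 0
path = [s]
for _ in range(N_STATES):
    _, s = act(s)
    path.append(s)
print(path)  # walks right toward the rewarding state 9
```

Because the table is indexed by (state, next state) rather than (state, action), the same update applies to transitions collected by any behavior policy, which is the off-policy property the abstract highlights; actions only re-enter when executing the greedy policy, via the (inverse) dynamics.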

Bibliographic Details
Main authors: Edwards, Ashley D; Sahni, Himanshu; Liu, Rosanne; Hung, Jane; Jain, Ankit; Wang, Rui; Ecoffet, Adrien; Miconi, Thomas; Isbell, Charles; Yosinski, Jason
Format: Article
Language: English
Subjects: Computer Science - Artificial Intelligence; Computer Science - Learning; Statistics - Machine Learning
Online access: https://arxiv.org/abs/2002.09505
DOI: 10.48550/arxiv.2002.09505
Published: 2020-02-21
Source: arXiv.org