Deep Residual Reinforcement Learning
We revisit residual algorithms in both model-free and model-based reinforcement learning settings. We propose the bidirectional target network technique to stabilize residual algorithms, yielding a residual version of DDPG that significantly outperforms vanilla DDPG in the DeepMind Control Suite benchmark. Moreover, we find the residual algorithm an effective approach to the distribution mismatch problem in model-based planning. Compared with the existing TD($k$) method, our residual-based method makes weaker assumptions about the model and yields a greater performance boost.
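The abstract centers on residual temporal-difference updates stabilized by target networks used in both gradient directions. As a rough illustration only, the sketch below shows one way such a bidirectional-target residual loss could look in PyTorch for a generic Q-function; the mixing weight `eta`, the toy network, and the Q-learning-style batch are assumptions made for this example, not the paper's residual-DDPG implementation.

```python
# Minimal sketch of a residual TD update with "bidirectional" target
# networks, in the spirit of the abstract above. NOT the paper's exact
# residual-DDPG algorithm; eta, the toy Q-network, and all tensor
# shapes are illustrative assumptions.
import torch
import torch.nn as nn

class QNet(nn.Module):
    def __init__(self, state_dim, action_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, s, a):
        return self.net(torch.cat([s, a], dim=-1)).squeeze(-1)

def residual_td_loss(q, q_target, batch, gamma=0.99, eta=0.05):
    """Mix a semi-gradient TD term with a residual-gradient term.

    Semi-gradient term: gradient flows only through Q(s, a); the
    bootstrap target uses the target network. Residual term: gradient
    flows only through Q(s', a'); here Q(s, a) is taken from the target
    network instead, i.e. targets are used in both directions.
    """
    s, a, r, s2, a2 = batch
    # Differentiate through Q(s, a) only.
    td_semi = r + gamma * q_target(s2, a2).detach() - q(s, a)
    # Differentiate through Q(s', a') only.
    td_res = r + gamma * q(s2, a2) - q_target(s, a).detach()
    return ((1 - eta) * td_semi.pow(2) + eta * td_res.pow(2)).mean()

# Usage with random data standing in for a replay batch.
q, q_target = QNet(3, 1), QNet(3, 1)
q_target.load_state_dict(q.state_dict())
batch = (torch.randn(32, 3), torch.randn(32, 1), torch.randn(32),
         torch.randn(32, 3), torch.randn(32, 1))
loss = residual_td_loss(q, q_target, batch)
loss.backward()  # gradients reach q through both TD directions
```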
Saved in:
Main authors: | Zhang, Shangtong; Boehmer, Wendelin; Whiteson, Shimon |
---|---|
Format: | Article |
Language: | English |
Subjects: | Computer Science - Artificial Intelligence; Computer Science - Learning; Statistics - Machine Learning |
Online access: | Order full text |
creator | Zhang, Shangtong; Boehmer, Wendelin; Whiteson, Shimon |
---|---|
description | We revisit residual algorithms in both model-free and model-based reinforcement learning settings. We propose the bidirectional target network technique to stabilize residual algorithms, yielding a residual version of DDPG that significantly outperforms vanilla DDPG in the DeepMind Control Suite benchmark. Moreover, we find the residual algorithm an effective approach to the distribution mismatch problem in model-based planning. Compared with the existing TD($k$) method, our residual-based method makes weaker assumptions about the model and yields a greater performance boost. |
doi_str_mv | 10.48550/arxiv.1905.01072 |
format | Article |
creationdate | 2019-05-03 |
rights | http://arxiv.org/licenses/nonexclusive-distrib/1.0 |
fulltext | fulltext_linktorsrc |
identifier | DOI: 10.48550/arxiv.1905.01072 |
language | eng |
recordid | cdi_arxiv_primary_1905_01072 |
source | arXiv.org |
subjects | Computer Science - Artificial Intelligence; Computer Science - Learning; Statistics - Machine Learning |
title | Deep Residual Reinforcement Learning |
url | https://arxiv.org/abs/1905.01072 |