Towards More Sample Efficiency in Reinforcement Learning with Data Augmentation

Deep reinforcement learning (DRL) is a promising approach for adaptive robot control, but its application to robotics is currently hindered by high sample requirements. We propose two novel data augmentation techniques for DRL to reuse observed data more efficiently. The first, called Kaleidoscope Experience Replay, exploits reflectional symmetries, while the second, called Goal-augmented Experience Replay, takes advantage of lax goal definitions. Our preliminary experimental results show a large increase in learning speed.
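The two augmentation ideas from the abstract can be illustrated with a minimal Python sketch. Everything here is an illustrative assumption rather than the paper's actual formulation: a 2-D goal-reaching task whose dynamics are assumed symmetric under reflection across the y-axis, a box-shaped goal tolerance, and the function names `kaleidoscope_augment` and `goal_augment` are all hypothetical.

```python
import random

def reflect(vec):
    """Reflect a 2-D vector across the y-axis (x -> -x); the choice of
    reflection plane is an assumption for this sketch."""
    return (-vec[0], vec[1])

def kaleidoscope_augment(transition):
    """Kaleidoscope Experience Replay sketch: mirror a stored transition.

    If the environment dynamics are symmetric under the reflection, the
    mirrored (state, action, next_state, goal) tuple is also valid
    experience and can be replayed at no extra interaction cost.
    """
    s, a, s_next, goal = transition
    return (reflect(s), reflect(a), reflect(s_next), reflect(goal))

def goal_augment(transition, achieved, tolerance=0.05, rng=random):
    """Goal-augmented Experience Replay sketch: exploit a lax goal
    definition by relabeling the goal with a point sampled inside the
    tolerance region around the achieved outcome; any such goal would
    also count as reached, so the relabeled transition is a success."""
    s, a, s_next, _ = transition
    new_goal = tuple(c + rng.uniform(-tolerance, tolerance) for c in achieved)
    return (s, a, s_next, new_goal)

t = ((1.0, 2.0), (0.5, 0.0), (1.5, 2.0), (2.0, 2.0))
mirrored = kaleidoscope_augment(t)        # ((-1.0, 2.0), (-0.5, 0.0), ...)
relabeled = goal_augment(t, achieved=(1.5, 2.0))
```

Both functions turn one observed transition into extra replay data without further environment interaction, which is the mechanism behind the reported gain in sample efficiency.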

Bibliographic Details
Main authors: Lin, Yijiong; Huang, Jiancong; Zimmer, Matthieu; Rojas, Juan; Weng, Paul
Format: Article
Language: English
Online access: order full text
Description: Deep reinforcement learning (DRL) is a promising approach for adaptive robot control, but its application to robotics is currently hindered by high sample requirements. We propose two novel data augmentation techniques for DRL to reuse observed data more efficiently. The first, called Kaleidoscope Experience Replay, exploits reflectional symmetries, while the second, called Goal-augmented Experience Replay, takes advantage of lax goal definitions. Our preliminary experimental results show a large increase in learning speed.
DOI: 10.48550/arxiv.1910.09959
Publication date: 2019-10-18
Full text: https://arxiv.org/abs/1910.09959
Rights: http://arxiv.org/licenses/nonexclusive-distrib/1.0 (free to read)
Source: arXiv.org
Subjects: Computer Science - Artificial Intelligence; Computer Science - Robotics