Revisiting Rainbow: Promoting more Insightful and Inclusive Deep Reinforcement Learning Research

Since the introduction of DQN, the vast majority of reinforcement learning research has focused on reinforcement learning with deep neural networks as function approximators. New methods are typically evaluated on a set of environments that have now become standard, such as Atari 2600 games. While these benchmarks help standardize evaluation, their computational cost has the unfortunate side effect of widening the gap between those with ample access to computational resources and those without. In this work we argue that, despite the community's emphasis on large-scale environments, traditional small-scale environments can still yield valuable scientific insights and can help reduce the barriers to entry for underprivileged communities. To substantiate our claims, we empirically revisit the paper that introduced the Rainbow algorithm [Hessel et al., 2018] and present some new insights into the algorithms used by Rainbow.

Bibliographic Details
Main authors: Obando-Ceron, Johan S; Castro, Pablo Samuel
Format: Article
Language: English
Subjects: Computer Science - Artificial Intelligence; Computer Science - Learning
Source: arXiv.org
Date: 2020-11-20
DOI: 10.48550/arxiv.2011.14826
Online access: https://arxiv.org/abs/2011.14826