Revisiting Rainbow: Promoting more Insightful and Inclusive Deep Reinforcement Learning Research
Saved in:
Format: Article
Language: English
Online access: Order full text
Abstract: Since the introduction of DQN, a vast majority of reinforcement learning research has focused on reinforcement learning with deep neural networks as function approximators. New methods are typically evaluated on a set of environments that have now become standard, such as Atari 2600 games. While these benchmarks help standardize evaluation, their computational cost has the unfortunate side effect of widening the gap between those with ample access to computational resources, and those without. In this work we argue that, despite the community's emphasis on large-scale environments, the traditional small-scale environments can still yield valuable scientific insights and can help reduce the barriers to entry for underprivileged communities. To substantiate our claims, we empirically revisit the paper which introduced the Rainbow algorithm [Hessel et al., 2018] and present some new insights into the algorithms used by Rainbow.
DOI: 10.48550/arxiv.2011.14826