Can we hop in general? A discussion of benchmark selection and design using the Hopper environment
Main authors: |  |
---|---|
Format: | Article |
Language: | English |
Subjects: |  |
Online access: | Order full text |
Abstract: | Empirical, benchmark-driven testing is a fundamental paradigm in the current RL community. While using off-the-shelf benchmarks in reinforcement learning (RL) research is common practice, this choice is rarely discussed. Benchmark choices are often made based on intuitive ideas like "legged robots" or "visual observations". In this paper, we argue that benchmarking in RL needs to be treated as a scientific discipline in itself. To illustrate our point, we present a case study on different variants of the Hopper environment, showing that the selection of standard benchmarking suites can drastically change how we judge the performance of algorithms. The field does not have a cohesive notion of what the different Hopper environments are representative of; they do not even seem to be representative of each other. Our experimental results suggest a larger issue in the deep RL literature: benchmark choices are neither commonly justified, nor does there exist a language that could be used to justify the selection of certain environments. This paper concludes with a discussion of the requirements for proper discussion and evaluation of benchmarks and recommends steps to start a dialogue towards this goal. |
DOI: | 10.48550/arxiv.2410.08870 |
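The abstract refers to a case study on different variants of the Hopper environment. As a purely illustrative sketch (the record does not list which variants or libraries the paper actually compares, so the package choices below are assumptions), two commonly used Hopper variants can be instantiated as follows, assuming Gymnasium with MuJoCo support and dm_control are installed:

```python
# Illustrative only: two commonly used Hopper variants.
# Which variants the paper compares is not stated in this record;
# gymnasium and dm_control are assumed here purely for illustration.

import gymnasium as gym       # Gymnasium MuJoCo Hopper
from dm_control import suite  # dm_control Hopper domain

# Gymnasium variant: 11-dim observation, 3-dim action,
# forward-progress reward with early termination on unhealthy states.
gym_env = gym.make("Hopper-v4")
obs, info = gym_env.reset(seed=0)
print("Gymnasium Hopper-v4 observation shape:", obs.shape)

# dm_control variant: dict observations, shaped hop reward,
# fixed-length episodes without early termination.
dmc_env = suite.load(domain_name="hopper", task_name="hop")
time_step = dmc_env.reset()
print("dm_control hopper-hop observation keys:",
      list(time_step.observation.keys()))
```

Even at this level, the two variants differ in observation format, reward shaping, and termination behaviour, which illustrates why results on one "Hopper" need not transfer to another.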