ARLBench: Flexible and Efficient Benchmarking for Hyperparameter Optimization in Reinforcement Learning
| Main authors: | , , , , , , , , , |
|---|---|
| Format: | Article |
| Language: | eng |
| Subject headings: | |
| Online access: | Order full text |
| Summary: | 17th European Workshop on Reinforcement Learning 2024. Hyperparameters are a critical factor in reliably training well-performing reinforcement learning (RL) agents. Unfortunately, developing and evaluating automated approaches for tuning such hyperparameters is both costly and time-consuming. As a result, such approaches are often only evaluated on a single domain or algorithm, making comparisons difficult and limiting insights into their generalizability. We propose ARLBench, a benchmark for hyperparameter optimization (HPO) in RL that allows comparisons of diverse HPO approaches while being highly efficient in evaluation. To enable research into HPO in RL, even in settings with low compute resources, we select a representative subset of HPO tasks spanning a variety of algorithm and environment combinations. This selection allows for generating a performance profile of an automated RL (AutoRL) method using only a fraction of the compute previously necessary, enabling a broader range of researchers to work on HPO in RL. With the extensive and large-scale dataset on hyperparameter landscapes that our selection is based on, ARLBench is an efficient, flexible, and future-oriented foundation for research on AutoRL. Both the benchmark and the dataset are available at https://github.com/automl/arlbench. |
| DOI: | 10.48550/arxiv.2409.18827 |
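
For readers who want to try the benchmark, the sketch below shows how an HPO loop interacts with ARLBench. It is a minimal sketch assuming the gymnasium-style `AutoRLEnv` interface described in the repository README at https://github.com/automl/arlbench; the class name, the `config_space` attribute, and the exact `reset`/`step` signatures are assumptions taken from the README and may differ between versions.

```python
# A minimal sketch of one AutoRL/HPO step with ARLBench. Assumptions:
# `AutoRLEnv`, its `config_space` attribute, and the gymnasium-style
# reset/step signatures follow the repository README and may change
# between versions.
from arlbench import AutoRLEnv

env = AutoRLEnv()  # default settings select one algorithm/environment HPO task

obs, info = env.reset()  # start a fresh training run of the underlying agent

# Sample a hyperparameter configuration from the tunable search space;
# stepping the environment trains the RL agent under that configuration
# and returns the resulting objective value(s), e.g. evaluation return.
config = env.config_space.sample_configuration()
obs, objectives, terminated, truncated, info = env.step(config)

print(objectives)
```

An outer HPO method (random search, Bayesian optimization, population-based methods, etc.) would replace the random sample above with its own suggested configurations, using the returned objectives as feedback.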