Fast Efficient Hyperparameter Tuning for Policy Gradients
Main Authors: , ,
Format: Article
Language: English
Online Access: Order full text
Abstract: The performance of policy gradient methods is sensitive to hyperparameter
settings that must be tuned for any new application. Widely used grid search
methods for tuning hyperparameters are sample inefficient and computationally
expensive. More advanced methods like Population Based Training that learn
optimal schedules for hyperparameters instead of fixed settings can yield
better results, but are also sample inefficient and computationally expensive.
In this paper, we propose Hyperparameter Optimisation on the Fly (HOOF), a
gradient-free algorithm that requires no more than one training run to
automatically adapt the hyperparameters that affect the policy update directly
through the gradient. The main idea is to use existing trajectories sampled by
the policy gradient method to optimise a one-step improvement objective,
yielding a sample and computationally efficient algorithm that is easy to
implement. Our experimental results across multiple domains and algorithms show
that using HOOF to learn these hyperparameter schedules leads to faster
learning with improved performance.
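To make the abstract's core loop concrete, here is a minimal sketch, not the authors' implementation: it assumes a generic policy-gradient setup in which `grad`, `update`, and `log_prob` are user-supplied callables (hypothetical names), and it scores each candidate learning rate on the already-collected trajectories with weighted importance sampling, a standard off-policy value estimator consistent with "use existing trajectories ... to optimise a one-step improvement objective".

```python
import numpy as np

def wis_estimate(trajectories, new_log_probs, old_log_probs):
    """Weighted importance sampling (WIS) estimate of a candidate policy's value.

    Each trajectory contributes its return, weighted by the ratio of its
    probability under the candidate policy to its probability under the
    behaviour policy that actually sampled it.
    """
    log_ratios = np.array([np.sum(nlp - olp)
                           for nlp, olp in zip(new_log_probs, old_log_probs)])
    weights = np.exp(log_ratios - np.max(log_ratios))  # shift for numerical stability
    weights /= weights.sum()                           # self-normalise (the "W" in WIS)
    returns = np.array([t["return"] for t in trajectories])
    return float(np.dot(weights, returns))

def hoof_style_step(theta, trajectories, candidate_lrs, grad, update, log_prob):
    """One HOOF-style update: among candidate learning rates, keep the one-step
    update whose estimated value on the existing trajectories is highest.

    `grad(theta, trajs)` returns the policy gradient, `update(theta, g, lr)`
    applies the update rule, and `log_prob(theta, traj)` returns per-step
    action log-probabilities -- all assumed interfaces for illustration.
    """
    g = grad(theta, trajectories)                      # one gradient computation
    old_lp = [log_prob(theta, t) for t in trajectories]
    best_value, best_theta = -np.inf, theta
    for lr in candidate_lrs:                           # gradient-free search over lr
        theta_cand = update(theta, g, lr)              # e.g. theta + lr * g
        new_lp = [log_prob(theta_cand, t) for t in trajectories]
        value = wis_estimate(trajectories, new_lp, old_lp)
        if value > best_value:
            best_value, best_theta = value, theta_cand
    return best_theta
```

No extra environment interaction is needed per candidate, which is what makes the search fit inside a single training run; as the abstract notes, the same idea applies to any hyperparameter that affects the update directly through the gradient, not only the learning rate.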
DOI: 10.48550/arxiv.1902.06583