Dynamic Learning Rate for Deep Reinforcement Learning: A Bandit Approach
Saved in:
Main authors: , , ,
Format: Article
Language: eng
Subjects:
Online access: Order full text
Abstract: In Deep Reinforcement Learning models trained using gradient-based techniques, the choice of optimizer and its learning rate are crucial to achieving good performance: higher learning rates can prevent the model from learning effectively, while lower ones might slow convergence. Additionally, due to the non-stationarity of the objective function, the best-performing learning rate can change over the training steps. To adapt the learning rate, a standard technique consists of using decay schedulers. However, these schedulers assume that the model is progressively approaching convergence, which may not always be true, leading to delayed or premature adjustments. In this work, we propose dynamic Learning Rate for deep Reinforcement Learning (LRRL), a meta-learning approach that selects the learning rate based on the agent's performance during training. LRRL is based on a multi-armed bandit algorithm, where each arm represents a different learning rate, and the bandit feedback is provided by the cumulative returns of the RL policy to update the arms' probability distribution. Our empirical results demonstrate that LRRL can substantially improve the performance of deep RL algorithms.
DOI: 10.48550/arxiv.2410.12598
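The abstract describes LRRL as a multi-armed bandit over candidate learning rates, with the policy's cumulative return serving as bandit feedback for updating the arms' probability distribution. The following Python sketch illustrates that idea with an Exp3-style update; the class name, hyperparameters, and the reward rescaling are assumptions made for illustration and may differ from the exact algorithm in the paper (arXiv:2410.12598).

```python
import numpy as np


class LearningRateBandit:
    """Exp3-style bandit over a discrete set of candidate learning rates.

    Minimal sketch: arms are learning rates and the feedback is the RL
    policy's cumulative return, as in the abstract; the Exp3 update,
    the reward rescaling, and the hyperparameter names are assumptions
    of this sketch, not necessarily the paper's algorithm.
    """

    def __init__(self, learning_rates, gamma=0.1, seed=0):
        self.learning_rates = list(learning_rates)
        self.k = len(self.learning_rates)
        self.gamma = gamma                      # exploration rate (assumed)
        self.weights = np.ones(self.k)
        self.rng = np.random.default_rng(seed)
        self.last_arm = None

    def _probabilities(self):
        # Exp3 mixes the normalized weights with uniform exploration.
        w = self.weights / self.weights.sum()
        return (1.0 - self.gamma) * w + self.gamma / self.k

    def select(self):
        """Sample a learning rate from the current arm distribution."""
        probs = self._probabilities()
        self.last_arm = self.rng.choice(self.k, p=probs)
        return self.learning_rates[self.last_arm]

    def update(self, cumulative_return, return_low=0.0, return_high=1.0):
        """Update arm weights from the return observed with the last arm.

        The return is rescaled to [0, 1] so the exponential update stays
        bounded; the rescaling bounds are an assumption of this sketch.
        """
        probs = self._probabilities()
        reward = (cumulative_return - return_low) / (return_high - return_low + 1e-8)
        reward = float(np.clip(reward, 0.0, 1.0))
        # Importance-weighted reward estimate for the chosen arm only.
        estimate = reward / probs[self.last_arm]
        self.weights[self.last_arm] *= np.exp(self.gamma * estimate / self.k)


# Hypothetical training loop: periodically re-sample the optimizer's learning
# rate and feed an evaluation return back to the bandit.
if __name__ == "__main__":
    bandit = LearningRateBandit([1e-5, 1e-4, 3e-4, 1e-3])
    for phase in range(50):
        lr = bandit.select()
        # cumulative_return = run_training_phase(agent, lr)  # hypothetical call
        cumulative_return = np.random.uniform(0.0, 500.0)     # dummy feedback
        bandit.update(cumulative_return, return_low=0.0, return_high=500.0)
```

In this sketch the learning rate is re-sampled once per training phase rather than per gradient step, which keeps the bandit feedback (a cumulative return) aligned with a fixed learning-rate choice; how often the real method switches arms is not specified by the abstract.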