SUNRISE: A Simple Unified Framework for Ensemble Learning in Deep Reinforcement Learning
Main authors:
Format: Article
Language: English
Online access: Order full text
Abstract: Off-policy deep reinforcement learning (RL) has been successful in a range of challenging domains. However, standard off-policy RL algorithms can suffer from several issues, such as instability in Q-learning and the difficulty of balancing exploration and exploitation. To mitigate these issues, we present SUNRISE, a simple unified ensemble method that is compatible with various off-policy RL algorithms. SUNRISE integrates two key ingredients: (a) ensemble-based weighted Bellman backups, which re-weight target Q-values based on uncertainty estimates from a Q-ensemble, and (b) an inference method that selects actions using the highest upper-confidence bounds for efficient exploration. By enforcing diversity among agents using Bootstrap with random initialization, we show that these ideas are largely orthogonal and can be fruitfully integrated, further improving the performance of existing off-policy RL algorithms, such as Soft Actor-Critic and Rainbow DQN, on both continuous and discrete control tasks in both low-dimensional and high-dimensional environments. Our training code is available at https://github.com/pokaxpoka/sunrise.
DOI: 10.48550/arxiv.2007.04938
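
To make the abstract's two ingredients concrete, the following is a minimal NumPy sketch of (a) ensemble-uncertainty-weighted Bellman targets and (b) UCB-based action selection among candidate actions. The weight form sigmoid(-T * std) + 0.5, the bonus coefficient `lam`, and all function names are illustrative assumptions based only on the description above, not code taken from the linked repository.

```python
import numpy as np

def weighted_bellman_targets(q_next_ensemble, rewards, dones, gamma=0.99, temperature=10.0):
    """Sketch of uncertainty-weighted Bellman targets.

    q_next_ensemble: (N, B) array of N ensemble members' target Q-values
    for the next state-action pair over a batch of B transitions.
    Targets with high ensemble disagreement (std) receive weights near 0.5;
    confident targets receive weights near 1.0.
    """
    q_std = q_next_ensemble.std(axis=0)
    weights = 1.0 / (1.0 + np.exp(temperature * q_std)) + 0.5  # sigmoid(-T*std) + 0.5
    targets = rewards + gamma * (1.0 - dones) * q_next_ensemble.mean(axis=0)
    return targets, weights

def ucb_action(q_candidate_ensemble, lam=1.0):
    """Pick the candidate action with the highest upper-confidence bound
    mean(Q) + lam * std(Q) over the ensemble, favoring actions the
    ensemble is uncertain about (exploration bonus).

    q_candidate_ensemble: (N, A) array of ensemble Q-values for A candidate actions.
    """
    q_mean = q_candidate_ensemble.mean(axis=0)
    q_std = q_candidate_ensemble.std(axis=0)
    return int(np.argmax(q_mean + lam * q_std))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    q_next = rng.normal(size=(5, 4))  # 5 ensemble members, batch of 4 transitions
    targets, w = weighted_bellman_targets(q_next, rewards=np.ones(4), dones=np.zeros(4))
    print(targets, w)                           # weights lie in (0.5, 1.0]
    print(ucb_action(rng.normal(size=(5, 3))))  # index of the most "optimistic" candidate action
```

In this sketch the weights would multiply the per-sample Bellman error in the critic loss, so that noisy targets contribute less to the update; the exact weighting scheme and hyperparameters in the authors' implementation may differ.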