Phasic Diversity Optimization for Population-Based Reinforcement Learning
Format: | Article |
Language: | English |
Abstract: | Reviewing previous work on diversity in reinforcement learning, diversity is often obtained via an augmented loss function, which requires balancing reward and diversity. Diversity optimization algorithms generally use multi-armed bandit (MAB) algorithms to select the coefficient from a pre-defined space. However, the dynamic distribution of reward signals for MABs, or the conflict between quality and diversity, limits the performance of these methods. We introduce the Phasic Diversity Optimization (PDO) algorithm, a Population-Based Training framework that separates reward and diversity training into distinct phases instead of optimizing a multi-objective function. In the auxiliary phase, agents diversified via determinants do not replace better-performing agents in the archive. This decoupling of reward and diversity allows us to apply aggressive diversity optimization in the auxiliary phase without performance degradation. Furthermore, we construct a dogfight scenario for aerial agents to demonstrate the practicality of the PDO algorithm. We introduce two implementations of the PDO archive and conduct tests in the newly proposed adversarial dogfight and in MuJoCo simulations. The results show that our proposed algorithm achieves better performance than the baselines. |
DOI: | 10.48550/arxiv.2403.11114 |
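
To make the phasic structure described in the abstract concrete, below is a minimal toy sketch in Python. It assumes a log-determinant of an RBF kernel over behavior embeddings as the determinant-based diversity measure, random-perturbation hill climbing in place of actual policy-gradient updates, and a per-slot archive gate; all function names (`reward_update`, `behavior_embedding`, `evaluate_return`, `diversity_score`) and update rules are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def reward_update(theta):
    """Reward phase: stand-in for a policy-gradient step (e.g. PPO)."""
    return theta + 0.1 * rng.standard_normal(theta.shape)

def behavior_embedding(theta):
    """Map policy parameters to a behavior descriptor (assumed mapping)."""
    return np.tanh(theta[:4])

def evaluate_return(theta):
    """Stand-in for episodic return; a toy quadratic objective here."""
    return -float(np.sum(theta ** 2))

def diversity_score(embeddings):
    """Determinant-based diversity: log-det of an RBF kernel matrix over
    behavior embeddings ('diversified via determinants')."""
    E = np.stack(embeddings)
    sq_dists = np.sum((E[:, None] - E[None, :]) ** 2, axis=-1)
    K = np.exp(-sq_dists) + 1e-6 * np.eye(len(E))  # jitter for stability
    return np.linalg.slogdet(K)[1]

# Training population and an archive of the best-performing policies.
population = [rng.standard_normal(8) for _ in range(4)]
archive = [(evaluate_return(p), p.copy()) for p in population]

for iteration in range(10):
    # --- Reward phase: optimize return only. ---
    population = [reward_update(theta) for theta in population]

    # --- Auxiliary phase: aggressively optimize diversity only. ---
    for i, theta in enumerate(population):
        candidate = theta + 0.3 * rng.standard_normal(theta.shape)
        embs = [behavior_embedding(p) for p in population]
        before = diversity_score(embs)
        embs[i] = behavior_embedding(candidate)
        if diversity_score(embs) > before:  # keep only diversity-improving mutations
            population[i] = candidate

    # --- Archive gate: a diversified agent enters the archive only if its
    # return matches or beats the incumbent, so the aggressive diversity
    # step cannot degrade archived performance. ---
    for i, theta in enumerate(population):
        ret = evaluate_return(theta)
        if ret >= archive[i][0]:
            archive[i] = (ret, theta.copy())

print([round(r, 3) for r, _ in archive])
```

The design point the sketch mirrors is the gate in the final loop: the auxiliary phase may mutate policies aggressively for diversity, but an archived agent is only overwritten when a candidate's return is at least as good, so archived performance never regresses.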