Robust Deep Reinforcement Learning with Adaptive Adversarial Perturbations in Action Space
Format: | Article |
Language: | English |
Online access: | Order full text |
Abstract: | Deep reinforcement learning (DRL) algorithms can suffer from modeling errors between the simulation and the real world. Many studies use adversarial learning to generate perturbations during the training process to model this discrepancy and improve the robustness of DRL. However, most of these approaches use a fixed parameter to control the intensity of the adversarial perturbation, which can lead to a trade-off between average performance and robustness. In fact, finding the optimal perturbation parameter is challenging: excessive perturbations may destabilize training and compromise agent performance, while insufficient perturbations may not impart enough information to enhance robustness. To keep training stable while improving robustness, we propose a simple but effective method, Adaptive Adversarial Perturbation (A2P), which dynamically selects an appropriate adversarial perturbation for each sample. Specifically, we propose an adaptive adversarial coefficient framework that adjusts the effect of the adversarial perturbation during training. By designing a metric for the current intensity of the perturbation, our method calculates suitable perturbation levels from the agent's current relative performance. An appealing feature of our method is that it is simple to deploy in real-world applications and does not require access to the simulator in advance. Experiments in MuJoCo show that our method improves training stability and learns a policy that remains robust when transferred to different test environments. The code is available at https://github.com/Lqm00/A2P-SAC. |
DOI: | 10.48550/arxiv.2405.11982 |
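The abstract describes the mechanism only at a high level: an adversarial perturbation is applied in the action space, and its intensity is re-scaled from a metric of the agent's current relative performance. The Python sketch below illustrates that general idea; the `AdaptiveActionPerturber` class, its return-ratio metric, and the random-direction adversary are assumptions made here for illustration and are not taken from the paper or the A2P-SAC repository.

```python
import numpy as np

class AdaptiveActionPerturber:
    """Illustrative sketch of an adaptive adversarial coefficient for
    action-space perturbations. The return-ratio update rule and the
    random-direction adversary are assumptions, not the A2P procedure."""

    def __init__(self, eps_max=0.3, lr=0.05, target_ratio=0.9):
        self.coef = 0.0                   # current perturbation intensity
        self.eps_max = eps_max            # upper bound on the intensity
        self.lr = lr                      # step size for coefficient updates
        self.target_ratio = target_ratio  # desired perturbed/clean return ratio
        self.clean_avg = None             # running average of clean returns

    def perturb(self, action, rng):
        """Return an adversarially perturbed action, clipped to [-1, 1]."""
        direction = rng.uniform(-1.0, 1.0, size=np.shape(action))
        return np.clip(np.asarray(action) + self.coef * direction, -1.0, 1.0)

    def update(self, perturbed_return, clean_return):
        """Adapt the coefficient from the agent's relative performance."""
        # Track unperturbed performance with an exponential moving average.
        if self.clean_avg is None:
            self.clean_avg = clean_return
        self.clean_avg = 0.9 * self.clean_avg + 0.1 * clean_return

        # Relative-performance metric: how well the agent still does under attack.
        ratio = perturbed_return / (abs(self.clean_avg) + 1e-8)

        # Strengthen the perturbation while the agent copes with it,
        # weaken it when performance (and hence training stability) degrades.
        self.coef = float(np.clip(self.coef + self.lr * (ratio - self.target_ratio),
                                  0.0, self.eps_max))
```

In a full training loop, `perturb` would wrap the actions sent to the simulator and `update` would be called once per episode with the returns obtained with and without the perturbation, so the coefficient grows only as long as the agent tolerates it.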