A study of natural robustness of deep reinforcement learning algorithms towards adversarial perturbations
Saved in:
Published in: AI Open, 2024, Vol. 5, p. 126-141
Main authors: , ,
Format: Article
Language: English
Subjects:
Online access: Full text
Abstract: Deep reinforcement learning (DRL) has numerous potential applications in the real world. However, DRL algorithms remain extremely sensitive to noise and adversarial perturbations, which inhibits the deployment of RL in many real-life applications. Analyzing the robustness of DRL algorithms to adversarial attacks is therefore an important prerequisite for their widespread adoption. Common test-time perturbations of DRL frameworks target either the observation channel or the action channel. Compared with observation-channel attacks, action-channel attacks are less studied, so few comparisons of the two attacks' effectiveness exist in the DRL literature. In this work, we examined the effectiveness of these two attack paradigms on common DRL algorithms and studied the natural robustness of DRL algorithms towards various adversarial attacks, in hopes of gaining insight into how each type of algorithm responds under different attack conditions.
• Evaluated RL agents' performance across various attack scenarios and gym environments.
• Identified optimal perturbation thresholds to ensure robustness against adversarial attacks.
• Ranked RL agents by sensitivity and robustness under adversarial attacks.
• Analyzed diverse attack strategies to assess their impact on system integrity.
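To make the distinction between the two attack channels concrete, here is a minimal toy sketch (not the paper's actual setup; the policy, dynamics, and sign-flip "attack" are all illustrative assumptions): an observation-channel attack corrupts what the agent sees before it acts, while an action-channel attack corrupts the action after the policy has chosen it.

```python
def policy(obs):
    """Toy deterministic policy: push the state toward the goal at zero."""
    return 1.0 if obs > 0 else -1.0

def step(state, action):
    """Toy dynamics: the action nudges the state by 0.1."""
    return state - 0.1 * action

def rollout(state, steps, attack=None):
    """Run the control loop, optionally perturbing one channel per step."""
    for _ in range(steps):
        obs = state
        if attack == "obs":
            obs = -obs          # observation-channel attack: flip what the agent sees
        action = policy(obs)
        if attack == "act":
            action = -action    # action-channel attack: flip what the agent does
        state = step(state, action)
    return abs(state)           # distance from the goal state 0; lower is better

print(rollout(1.0, 20))         # clean run converges near the goal
print(rollout(1.0, 20, "obs"))  # observation attack drives the state away
print(rollout(1.0, 20, "act"))  # action attack drives the state away
```

In this contrived example both channels are equally damaging; the paper's point is that for real DRL agents the two channels are not equivalent, and their relative effectiveness varies by algorithm.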
ISSN: 2666-6510
DOI: 10.1016/j.aiopen.2024.08.005