Constraints Driven Safe Reinforcement Learning for Autonomous Driving Decision-Making
Published in: | IEEE Access 2024, Vol. 12, p. 128007-128023 |
---|---|
Main authors: | , , , , |
Format: | Article |
Language: | eng |
Keywords: | |
Online access: | Full text |
Summary: | Although reinforcement learning (RL) methodologies show promise for decision-making and planning problems in autonomous driving, ensuring the safety of the vehicle under all circumstances remains a formidable challenge in practical applications. Current RL methods are predominantly driven by singular reward mechanisms and frequently struggle to balance multiple sub-rewards such as safety, comfort, and efficiency. To address these limitations, this paper introduces a constraint-driven safe RL method, applied to decision-making and planning policies in highway scenarios. The method ensures that decisions maximize performance rewards within the bounds of safety constraints, exhibiting exceptional robustness. First, the framework reformulates the autonomous driving decision-making problem as a Constrained Markov Decision Process (CMDP) within the safe RL framework. It then introduces a Multi-Level Safety-Constrained Policy Optimization (MLSCPO) method, incorporating a cost function to enforce the safety constraints. Finally, simulated tests conducted in the CARLA environment demonstrate that the proposed MLSCPO method outperforms a current state-of-the-art safe reinforcement learning method, Proximal Policy Optimization with Lagrangian (PPO-Lag), as well as the traditional stable longitudinal and lateral driving model, the Intelligent Driver Model with Minimization of Overall Braking Induced by Lane Changes (IDM+MOBIL). Compared to the classic IDM+MOBIL method, the proposed approach not only achieves efficient driving but also offers a better driving experience. Compared with the reinforcement learning method PPO-Lag, it significantly enhances safety while maintaining driving efficiency, achieving a zero-collision rate. In future work, we will integrate the aforementioned potential extensions to enhance the usability and generalization capabilities of the method in real-world applications. |
---|---|
ISSN: | 2169-3536 |
DOI: | 10.1109/ACCESS.2024.3454249 |
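The abstract describes a CMDP formulation in which the policy maximizes reward subject to a safety-cost budget, with PPO-Lag handling the constraint via a Lagrange multiplier. The following is a minimal sketch of that multiplier (dual ascent) update only, under assumed toy values; it is not the paper's MLSCPO algorithm, and all function names and numbers here are illustrative.

```python
# Sketch of the Lagrangian relaxation behind PPO-Lag-style constrained RL:
# maximize J_r(pi) subject to J_c(pi) <= d, relaxed to J_r - lam * J_c,
# with the multiplier lam adapted by dual ascent. Toy values, not the paper's method.

def lagrangian_objective(reward_return, cost_return, lam):
    """Scalarized objective: performance reward minus multiplier-weighted safety cost."""
    return reward_return - lam * cost_return

def update_multiplier(lam, cost_return, cost_limit, lr=0.1):
    """Dual ascent: lam grows while the cost budget J_c <= d is violated,
    and decays (floored at zero) once the constraint is satisfied."""
    return max(0.0, lam + lr * (cost_return - cost_limit))

# Assumed per-iteration episode cost returns (e.g. accumulated near-collision
# penalties) shrinking as the policy becomes safer; budget d = 1.0.
lam = 0.0
cost_limit = 1.0
for cost_return in [3.0, 2.0, 1.5, 1.0, 0.5]:
    lam = update_multiplier(lam, cost_return, cost_limit)

print(round(lam, 2))  # multiplier settles once costs drop below the budget
```

The multiplier rises while episodes exceed the cost budget and relaxes afterwards, which is how a single scalarized objective can still honor a hard-style safety constraint on average.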