GOOSE: Goal-Conditioned Reinforcement Learning for Safety-Critical Scenario Generation
Main authors:
Format: Article
Language: English
Subjects:
Online access: Order full text
Abstract: Scenario-based testing is considered state-of-the-art for verifying and validating Advanced Driver Assistance Systems (ADASs) and Automated Driving Systems (ADSs). However, the practical application of scenario-based testing requires an efficient method to generate or collect the scenarios that are needed for the safety assessment. In this paper, we propose Goal-conditioned Scenario Generation (GOOSE), a goal-conditioned reinforcement learning (RL) approach that automatically generates safety-critical scenarios to challenge ADASs or ADSs. In order to simultaneously set up and optimize scenarios, we propose to control vehicle trajectories at the scenario level. Each step in the RL framework corresponds to a scenario simulation. We use Non-Uniform Rational B-Splines (NURBS) for trajectory modeling. To guide the goal-conditioned agent, we formulate test-specific, constraint-based goals inspired by the OpenScenario Domain Specific Language (DSL). Through experiments conducted on multiple pre-crash scenarios derived from UN Regulation No. 157 for Active Lane Keeping Systems (ALKS), we demonstrate the effectiveness of GOOSE in generating scenarios that lead to safety-critical events.
DOI: 10.48550/arxiv.2406.03870
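The abstract names NURBS curves as the trajectory model that the RL agent manipulates. The snippet below is a minimal, hypothetical sketch of that building block only: it evaluates a 2D NURBS trajectory from made-up control points and weights using SciPy's B-spline in homogeneous coordinates. It is not the authors' implementation; the function name nurbs_trajectory and all numeric values are illustrative assumptions.

```python
"""Minimal sketch: sampling a 2D vehicle trajectory from a NURBS curve.

Illustrative only; control points, weights, and knot construction are
assumptions, not the parameterization used in the GOOSE paper.
"""
import numpy as np
from scipy.interpolate import BSpline


def nurbs_trajectory(control_points, weights, degree, num_samples=100):
    """Evaluate a NURBS curve by running a B-spline on homogeneous
    control points (w_i * P_i, w_i) and projecting back."""
    control_points = np.asarray(control_points, dtype=float)  # shape (n, 2)
    weights = np.asarray(weights, dtype=float)                 # shape (n,)
    n = len(control_points)

    # Clamped (open uniform) knot vector so the curve interpolates the
    # first and last control points; length must be n + degree + 1.
    inner = np.linspace(0.0, 1.0, n - degree + 1)
    knots = np.concatenate(([0.0] * degree, inner, [1.0] * degree))

    # Homogeneous coordinates: (w*x, w*y, w).
    homogeneous = np.column_stack((control_points * weights[:, None], weights))
    spline = BSpline(knots, homogeneous, degree)

    u = np.linspace(0.0, 1.0, num_samples)
    pts = spline(u)
    return pts[:, :2] / pts[:, 2:3]  # divide out the weights


if __name__ == "__main__":
    # A gentle lateral move resembling a cut-in; purely illustrative values.
    ctrl = [(0, 0), (20, 0), (40, 1.5), (60, 3.5), (80, 3.5)]
    w = [1.0, 1.0, 2.0, 1.0, 1.0]  # larger weight pulls the curve toward (40, 1.5)
    xy = nurbs_trajectory(ctrl, w, degree=3)
    print(xy[0], xy[-1])  # starts at (0, 0), ends at (80, 3.5)
```

In the abstract's framing, an agent action would adjust parameters such as these control points and weights, the resulting trajectory would be replayed in a full scenario simulation (one RL step per simulation), and the outcome would be scored against a test-specific, constraint-based goal; that surrounding loop and the OpenScenario-DSL-inspired goal formulation are not reproduced here.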