Dense reinforcement learning for safety validation of autonomous vehicles

Bibliographic details
Published in: Nature (London), 2023-03, Vol. 615 (7953), p. 620-627
Authors: Feng, Shuo; Sun, Haowei; Yan, Xintao; Zhu, Haojie; Zou, Zhengxia; Shen, Shengyin; Liu, Henry X.
Format: Article
Language: English
Online access: Full text
Abstract: One critical bottleneck that impedes the development and deployment of autonomous vehicles is the prohibitively high economic and time cost required to validate their safety in a naturalistic driving environment, owing to the rarity of safety-critical events^1. Here we report the development of an intelligent testing environment, where artificial-intelligence-based background agents are trained to validate the safety performance of autonomous vehicles in an accelerated mode, without loss of unbiasedness. From naturalistic driving data, the background agents learn what adversarial manoeuvre to execute through a dense deep-reinforcement-learning (D2RL) approach, in which Markov decision processes are edited by removing non-safety-critical states and reconnecting critical ones so that the information in the training data is densified. D2RL enables neural networks to learn from densified information with safety-critical events and achieves tasks that are intractable for traditional deep-reinforcement-learning approaches. We demonstrate the effectiveness of our approach by testing a highly automated vehicle in both highway and urban test tracks with an augmented-reality environment, combining simulated background vehicles with physical road infrastructure and a real autonomous test vehicle. Our results show that the D2RL-trained agents can accelerate the evaluation process by multiple orders of magnitude (10^3 to 10^5 times faster). In addition, D2RL will enable accelerated testing and training with other safety-critical autonomous systems. An intelligent environment has been developed for testing the safety performance of autonomous vehicles and its effectiveness has been demonstrated for highway and urban test tracks in an augmented-reality environment.
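The densification step described in the abstract, in which episodes are edited by removing non-safety-critical states and reconnecting the critical ones, can be sketched in a few lines. This is a minimal illustrative reading of that idea, not the authors' implementation: the episode format, the criticality predicate, and the `densify_episode` helper are all assumptions.

```python
# Illustrative sketch of the "densified" editing idea: drop
# non-safety-critical steps from an episode and re-link the
# remaining ones into a shorter chain. All names and the episode
# format are hypothetical, not the paper's API.

def densify_episode(episode, is_critical):
    """Return a shortened episode containing only safety-critical
    steps, with next_state pointers re-linked across the gaps."""
    dense = [dict(step) for step in episode if is_critical(step)]
    for prev, nxt in zip(dense, dense[1:]):
        prev["next_state"] = nxt["state"]  # reconnect the chain
    return dense

# Toy episode: only two of six steps are near-collision states.
episode = [
    {"state": s, "next_state": s + 1, "reward": 0.0, "critical": s in (1, 4)}
    for s in range(6)
]
dense = densify_episode(episode, lambda step: step["critical"])
print([step["state"] for step in dense])  # -> [1, 4]
```

The point of the edit is that gradients during training are then computed only on transitions that carry safety-relevant signal, which is what allows the reported acceleration.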
ISSN: 0028-0836 (print); 1476-4687 (electronic)
DOI:10.1038/s41586-023-05732-2