Improving model robustness of traffic crash risk evaluation via adversarial mix-up under traffic flow fundamental diagram

Bibliographic Details
Published in: Accident Analysis and Prevention, 2024-01, Vol. 194, Article 107360
Authors: Yu, Rongjie; Han, Lei; Abdel-Aty, Mohamed; Wang, Liqiang; Zou, Zihang
Format: Article
Language: English
Online access: Full text
Description
Abstract:
Highlights:
• Proposed a traffic flow adversarial example (TF-AE) generation method via mix-up under the traffic flow fundamental diagram.
• With the developed TF-AEs used to evaluate model robustness, model accuracy decreased by 8.0% and sensitivity dropped by 18.0%.
• Developed a coverage-oriented adversarial training method to improve robustness, avoiding 76% of the accuracy drop and 98% of the sensitivity drop.
• The crash risk evaluation model with adversarial training produced more stable outputs under real-world traffic dynamic fluctuations.

Recent state-of-the-art crash risk evaluation studies have exploited deep learning (DL) techniques to improve performance in identifying high-risk traffic operation statuses. However, it is doubtful whether such DL-based models remain robust to real-world traffic dynamics (e.g., random traffic fluctuations), as DL models are sensitive to input changes, where small perturbations can lead to wrong predictions. This study raises the critical robustness issue for crash risk evaluation models and investigates countermeasures to enhance it. By mixing up crash and non-crash samples under the traffic flow fundamental diagram, traffic flow adversarial examples (TF-AEs) were generated to simulate real-world traffic fluctuations. With the developed TF-AEs, model accuracy decreased by 8% and sensitivity dropped by 18%, indicating weak robustness of the baseline model (a convolutional neural network (CNN)-based crash risk evaluation model). Then, a coverage-oriented adversarial training method was proposed to improve model robustness under highly imbalanced crash and non-crash situations and various crash risk transition patterns. Experiments showed that the proposed method was effective in improving model robustness, as it could prevent 76.5% of the accuracy drop and 98.9% of the sensitivity drop against TF-AEs. Finally, the stability of the evaluation model's outputs and the limitations of the current study are discussed.
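The mix-up generation of TF-AEs described in the abstract can be pictured with a minimal sketch based on standard mix-up (linear interpolation between a crash and a non-crash traffic-state sample). The function name, the Beta-distributed mixing coefficient, and the 0.5 labelling threshold below are illustrative assumptions; the paper's actual feasibility constraints under the traffic flow fundamental diagram and its coverage-oriented sampling are not detailed in this record.

```python
import numpy as np

def mixup_tf_ae(x_crash, x_noncrash, alpha=0.4, rng=None):
    """Illustrative mix-up of a crash and a non-crash traffic-state sample.

    x_crash, x_noncrash : np.ndarray
        Traffic-flow feature arrays of identical shape (e.g., speed,
        volume, occupancy over upstream/downstream detectors).
    alpha : float
        Beta-distribution parameter for the mixing coefficient (assumed).
    """
    rng = np.random.default_rng() if rng is None else rng
    lam = rng.beta(alpha, alpha)          # mixing coefficient in (0, 1)
    x_mixed = lam * x_crash + (1.0 - lam) * x_noncrash
    y_mixed = 1 if lam >= 0.5 else 0      # label follows the dominant component (assumed rule)
    return x_mixed.astype(np.float32), y_mixed

# Hypothetical usage: generate mixed samples to stress-test the baseline model
# and to augment the training set for adversarial training.
# x_adv, y_adv = mixup_tf_ae(x_crash_sample, x_noncrash_sample)
```

Per the abstract, such TF-AEs serve two roles: they expose the baseline CNN model's weak robustness, and, when folded back into training (the coverage-oriented adversarial training), they recover most of the lost accuracy and sensitivity.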
ISSN: 0001-4575, 1879-2057
DOI: 10.1016/j.aap.2023.107360