Hacking Predictors Means Hacking Cars: Using Sensitivity Analysis to Identify Trajectory Prediction Vulnerabilities for Autonomous Driving Security
| Main author(s): | |
|---|---|
| Format: | Article |
| Language: | eng |
| Subjects: | |
| Online access: | Order full text |
Abstract:
Adversarial attacks on learning-based multi-modal trajectory predictors have already been demonstrated. However, open questions remain about the effects of perturbations on inputs other than state histories, and about how these attacks impact downstream planning and control. In this paper, we conduct a sensitivity analysis on two trajectory prediction models, Trajectron++ and AgentFormer. The analysis reveals that, among all inputs, almost all of the perturbation sensitivity for both models is concentrated in the most recent position and velocity states. We additionally demonstrate that, despite this dominant sensitivity to state-history perturbations, an undetectable image-map perturbation crafted with the Fast Gradient Sign Method can induce large increases in prediction error in both models, revealing that these trajectory predictors are, in fact, susceptible to image-based attacks. Using an optimization-based planner and example perturbations crafted from the sensitivity results, we show how these attacks can cause a vehicle to come to a sudden stop from moderate driving speeds.
DOI: 10.48550/arxiv.2401.10313