Poisoning Attacks on Federated Learning for Autonomous Driving
Format: Article
Language: English
Abstract: Federated Learning (FL) is a decentralized learning paradigm that enables parties to collaboratively train models while keeping their data confidential. Within autonomous driving, it brings the potential of reducing data storage costs, reducing bandwidth requirements, and accelerating learning. FL is, however, susceptible to poisoning attacks. In this paper, we introduce two novel poisoning attacks on FL tailored to regression tasks within autonomous driving: FLStealth and the Off-Track Attack (OTA). FLStealth, an untargeted attack, aims to provide model updates that degrade the global model's performance while appearing benign. OTA, on the other hand, is a targeted attack whose objective is to change the global model's behavior when exposed to a certain trigger. We demonstrate the effectiveness of our attacks through comprehensive experiments on the task of vehicle trajectory prediction. In particular, we show that, among five different untargeted attacks, FLStealth is the most successful at bypassing the defenses employed by the server. For OTA, we demonstrate the inability of common defense strategies to mitigate the attack, highlighting the critical need for new defensive mechanisms against targeted attacks within FL for autonomous driving.
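To make the two attack ideas in the abstract concrete, the PyTorch sketch below shows how a malicious client might craft its local update in each case. This is a minimal illustration under our own assumptions, not the paper's implementation: the function names `flstealth_update` and `ota_update`, the loss-ascent formulation, and the trigger/target handling are all hypothetical stand-ins for the actual algorithms described in the article.

```python
# Hypothetical sketch of the two poisoning strategies; the paper's actual
# methods may differ in how updates are crafted and constrained.
import torch
import torch.nn as nn


def flstealth_update(model, loader, scale=0.1):
    """Untargeted poisoning (FLStealth-like idea): take gradient-*ascent*
    steps to hurt the global model, while keeping the perturbation small
    enough that the resulting update still looks benign to the server."""
    loss_fn = nn.MSELoss()  # regression task, e.g. trajectory prediction
    poisoned = {k: v.clone() for k, v in model.state_dict().items()}
    for x, y in loader:
        model.zero_grad()
        loss_fn(model(x), y).backward()
        for name, p in model.named_parameters():
            # Adding (not subtracting) the gradient increases the loss;
            # the small `scale` bounds the deviation from a benign update.
            poisoned[name] += scale * p.grad
    return poisoned


def ota_update(model, loader, trigger, target, lr=0.01):
    """Targeted poisoning (OTA-like idea): train normally on clean data,
    but also teach the model to emit an attacker-chosen trajectory
    whenever a trigger pattern appears in the input."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for x, y in loader:
        opt.zero_grad()
        clean = loss_fn(model(x), y)                 # preserve clean accuracy
        backdoor = loss_fn(model(x + trigger),       # trigger-stamped input
                           target.expand_as(y))      # attacker-chosen output
        (clean + backdoor).backward()
        opt.step()
    return model.state_dict()
```

In a standard FL round, the server would aggregate these state dictionaries (e.g. via FedAvg) with those of honest clients; the abstract's point is that the untargeted update above is built to evade norm- and similarity-based defenses, while the backdoored one leaves clean-data performance largely intact.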
DOI: 10.48550/arxiv.2405.01073