Tutorial: Toward Robust Deep Learning against Poisoning Attacks

Bibliographic Details
Published in: ACM Transactions on Embedded Computing Systems, 2023-04, Vol. 22 (3), pp. 1-15, Article 42
Main Authors: Chen, Huili; Koushanfar, Farinaz
Format: Article
Language: English
Online Access: Full text
Abstract: Deep Learning (DL) has been increasingly deployed in various real-world applications due to its unprecedented performance and its automated capability of learning hidden representations. While DL can achieve high task performance, training a DL model is both time- and resource-consuming. Therefore, current DL model supply chains assume that customers obtain pre-trained Deep Neural Networks (DNNs) from third-party providers that have sufficient computing power. In the centralized setting, the model designer trains the DL model on a local dataset. However, the collected training data may contain erroneous or poisoned data points, and a malicious model designer might deliberately craft poisoned training samples to inject a backdoor into the DL model distributed to users. As a result, the user's model will malfunction. In the federated learning setting, a cloud server aggregates local models trained on individual local datasets and updates the global model. In this scenario, a local client could poison its local training set and/or arbitrarily manipulate its local update. If the cloud server incorporates these malicious local gradients during model aggregation, the resulting global model will exhibit degraded performance or backdoor behaviors. In this article, we present a comprehensive overview of contemporary data poisoning and model poisoning attacks against DL models in both centralized and federated learning scenarios. In addition, we review existing detection and defense techniques against these poisoning attacks.
ISSN: 1539-9087 (print), 1558-3465 (electronic)
DOI: 10.1145/3574159
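
The abstract distinguishes data poisoning in centralized training from model poisoning in federated learning. The minimal NumPy sketch below illustrates both attack classes at that level of abstraction; every function name, data shape, and parameter here is an illustrative assumption, not taken from the article.

import numpy as np

def poison_training_set(images, labels, target_label, rate=0.1, patch=3):
    # Data poisoning (backdoor): stamp a small trigger patch onto a random
    # fraction of the training images and relabel them to the attacker's
    # target class, so a model trained on this set learns to associate the
    # trigger with that class. `images` is assumed to have shape (N, H, W).
    images, labels = images.copy(), labels.copy()
    n_poison = int(rate * len(images))
    idx = np.random.choice(len(images), size=n_poison, replace=False)
    images[idx, -patch:, -patch:] = 1.0  # white square, bottom-right corner
    labels[idx] = target_label
    return images, labels

def fedavg(global_weights, client_deltas):
    # Plain federated averaging: the server adds the mean of the clients'
    # weight updates (deltas) to the current global model.
    return global_weights + np.mean(client_deltas, axis=0)

def malicious_delta(backdoored_delta, n_clients):
    # Model poisoning (model replacement): a single malicious client scales
    # its update by the client count so that, after averaging, the global
    # model moves roughly the full distance toward the backdoored weights.
    return n_clients * backdoored_delta

For instance, with ten clients, a single attacker submitting malicious_delta(d, 10) contributes d in full after averaging; this is precisely the kind of manipulated local update that the abstract warns the server against incorporating during aggregation.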