Stealthy Energy Consumption-oriented Attacks on Training Stage in Deep Learning
Saved in:
Published in: Journal of Signal Processing Systems, 2023-12, Vol. 95 (12), pp. 1425-1437
Main authors:
Format: Article
Language: English
Subjects:
Online access: Full text
Abstract: Deep Learning as a Service (DLaaS) has been developing rapidly in recent years, enabling applications such as self-driving, face recognition, and natural language processing for small enterprises. However, DLaaS can also introduce enormous computing power consumption at the service end. Existing works focus on optimizing the training process, for example by using low-cost chips or tuning the training settings for better energy efficiency. In this paper, we revisit this issue from an adversary's perspective: an attacker attempts to maliciously make victims waste more training effort without being noticed. In particular, we propose a novel attack that stealthily enlarges training costs by poisoning the training data. By adopting the Projected Gradient Descent (PGD) method to generate poisoned samples, we show that attackers can increase training costs by as much as 88% in both the white-box and the black-box scenario, with only a tiny influence on the model's accuracy.
ISSN: 1939-8018, 1939-8115
DOI: 10.1007/s11265-023-01895-3
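
The abstract describes crafting poisoned samples with PGD but this record carries no code. Below is a minimal sketch of a standard PGD perturbation step in PyTorch, assuming a surrogate classifier `model` and hypothetical hyperparameters `eps`, `alpha`, and `steps` that are not taken from the paper; the paper's exact poisoning objective for inflating training cost is not given in this record.

```python
import torch
import torch.nn.functional as F

def pgd_poison(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    """Craft an L-infinity-bounded perturbation of a clean batch (x, y) by
    standard PGD: repeatedly step in the sign of the loss gradient, then
    project back into the eps-ball around x. Hyperparameters are illustrative,
    not the paper's settings."""
    model.eval()
    # Random start inside the eps-ball, clipped to the valid pixel range.
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()       # ascend the loss
            x_adv = x + (x_adv - x).clamp(-eps, eps)  # project onto the eps-ball
            x_adv = x_adv.clamp(0, 1)                 # keep valid pixel range
    return x_adv.detach()
```

An attacker would mix such perturbed samples into the victim's training set; per the abstract, this kind of poisoning raised training costs by as much as 88% in both white-box and black-box settings while barely affecting model accuracy.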