Towards Security Threats of Deep Learning Systems: A Survey

Bibliographic Details
Published in: IEEE Transactions on Software Engineering, 2022-05, Vol. 48 (5), p. 1743-1770
Authors: He, Yingzhe; Meng, Guozhu; Chen, Kai; Hu, Xingbo; He, Jinwen
Format: Article
Language: English
Description
Abstract: Deep learning has gained tremendous success and great popularity in the past few years. However, deep learning systems suffer from several inherent weaknesses that can threaten the security of learning models, and deep learning's wide use further magnifies the impact and consequences. To this end, a large body of research has been conducted to exhaustively identify intrinsic weaknesses and propose feasible mitigations. Yet it remains unclear how these weaknesses arise and how effective the attack approaches are at assaulting deep learning. To unveil these security weaknesses and aid the development of robust deep learning systems, we investigate attacks on deep learning and analyze them to draw findings from multiple perspectives. In particular, we focus on four types of attacks associated with security threats to deep learning: model extraction attacks, model inversion attacks, poisoning attacks, and adversarial attacks. For each type of attack, we construct its essential workflow along with the adversary's capabilities and attack goals. We devise pivot metrics for comparing the attack approaches and use them to perform quantitative and qualitative analyses. From this analysis, we identify significant and indispensable factors in an attack vector, e.g., how to reduce queries to target models and what distance should be used for measuring perturbation. We present 18 findings covering these approaches' merits and demerits, success probability, deployment complexity, and prospects. Moreover, we discuss other potential security weaknesses and possible mitigations that can inspire relevant research in this area.
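The abstract's question of "what distance should be used for measuring perturbation" refers to the L_p norms the adversarial-attack literature uses to quantify how far a crafted input strays from the original. As a hedged illustration (assumed here for clarity; the helper perturbation_norms is hypothetical and not code from the paper), a minimal Python sketch computing the three most commonly used distances:

import numpy as np

def perturbation_norms(x, x_adv):
    # Illustrative L_p distances between an input and its adversarial
    # counterpart; these are the norms most often used to bound or
    # compare perturbations in adversarial-attack work.
    delta = (x_adv - x).ravel()
    return {
        "L0": int(np.count_nonzero(delta)),     # number of changed features/pixels
        "L2": float(np.linalg.norm(delta)),     # Euclidean size of the change
        "Linf": float(np.max(np.abs(delta))),   # largest single-feature change
    }

# Usage: a small random "image" and a bounded random perturbation.
rng = np.random.default_rng(0)
x = rng.random((8, 8))
x_adv = np.clip(x + rng.uniform(-0.03, 0.03, size=x.shape), 0.0, 1.0)
print(perturbation_norms(x, x_adv))

Which norm is appropriate depends on the attack: an L_inf budget caps every individual feature change, while an L0 budget limits how many features may change at all. This trade-off is among the factors the survey's findings weigh.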
ISSN: 0098-5589 (print), 1939-3520 (electronic)
DOI: 10.1109/TSE.2020.3034721