Mind control attack: Undermining deep learning with GPU memory exploitation

Bibliographic Details
Published in: Computers & Security, March 2021, Vol. 102, p. 102115, Article 102115
Authors: Park, Sang-Ok; Kwon, Ohmin; Kim, Yonggon; Cha, Sang Kil; Yoon, Hyunsoo
Format: Article
Language: English
Online access: Full text
Description
Abstract: Modern deep learning frameworks rely heavily on GPUs to accelerate computation. However, the security implications of GPU device memory exploitation for deep learning frameworks have been largely neglected. In this paper, we argue that GPU device memory manipulation is a novel attack vector against deep learning systems. We present an attack that leverages this vector to degrade prediction accuracy until the model's output is no better than random guessing. To the best of our knowledge, we are the first to demonstrate a practical attack that directly exploits deep learning frameworks through GPU memory manipulation. We confirmed that our attack works on three popular deep learning frameworks, TensorFlow, CNTK, and Caffe, running on CUDA. Finally, we propose potential defense mechanisms against our attack and discuss concerns about GPU memory safety.
ISSN: 0167-4048, 1872-6208
DOI: 10.1016/j.cose.2020.102115
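
The abstract does not spell out the manipulation mechanism, but the core idea, overwriting model parameters that reside in GPU device memory so that predictions drift toward random guessing, can be sketched in CUDA. The snippet below is an illustrative sketch only, not the paper's attack code: the victim_weights buffer, its size, and the zero-overwrite payload are hypothetical stand-ins for a pointer an attacker would somehow have to obtain into a victim framework's device memory.

```cuda
// Illustrative sketch (hypothetical, not the paper's attack): corrupt a
// model's weight buffer in GPU device memory to degrade its predictions.
#include <cuda_runtime.h>
#include <cstdio>

__global__ void corrupt_weights(float *weights, size_t n) {
    size_t i = blockIdx.x * (size_t)blockDim.x + threadIdx.x;
    if (i < n) {
        weights[i] = 0.0f;   // overwrite parameters stored in device memory
    }
}

int main() {
    const size_t n = 1 << 20;         // hypothetical weight-buffer size
    float *victim_weights = nullptr;  // stand-in for a pointer into memory
                                      // allocated by the victim framework;
                                      // allocated here only to keep the
                                      // demo self-contained
    cudaMalloc(&victim_weights, n * sizeof(float));

    int threads = 256;
    int blocks = (int)((n + threads - 1) / threads);
    corrupt_weights<<<blocks, threads>>>(victim_weights, n);
    cudaDeviceSynchronize();

    printf("weight buffer overwritten (illustrative demo)\n");
    cudaFree(victim_weights);
    return 0;
}
```

Whether such a write is feasible against another process's model depends on how the platform isolates GPU device memory, which is precisely the memory-safety concern the paper raises.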