Robust Pervasive Detection for Adversarial Samples of Artificial Intelligence in IoT Environments

Bibliographic Details
Published in: IEEE Access, 2019, Vol. 7, pp. 88693-88704
Main authors: Wang, Shen; Qiao, Zhuobiao
Format: Article
Language: English
Abstract: Nowadays, artificial intelligence technologies (e.g., deep neural networks) are widely used in the Internet of Things (IoT) to provide smart services and process sensing data. Evolving neural networks even exceed human cognitive performance on some tasks. However, the accuracy of these models depends to some extent on the accuracy of the training data. Well-designed adversarial perturbations, when added to images, are sufficient to deceive a model. Such attacks cause classifiers trained by the neural network to misidentify objects and thus fail completely. On the other hand, the existing defensive methods suffer from two criticisms. First, and most pressing, is an unsatisfactory detection rate due to low robustness toward adversarial samples. Second, excessive dependence on the output of specific network layers hinders the emergence of universal schemes. In this paper, we propose the large margin cosine estimation (LMCE) detection scheme to overcome these shortcomings, making detection independent and universal. We illustrate the principle of our approach and analyze its important parameters. Moreover, we model various types of adversarial attacks, establish the proposed defense mechanisms against them, and evaluate our approach from different aspects. The method has been validated on a range of standard datasets, including MNIST, CIFAR-10, and SVHN. The assessment strongly reflects the robustness and pervasiveness of the approach in the face of various white-box and semi-white-box attacks.
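
The abstract does not spell out the LMCE formulation, so nothing below reproduces the paper's detector. As a point of reference only, the sketch implements the generic large margin cosine loss (as popularized by CosFace), which shares the large-margin-cosine idea named in LMCE; the function name, the scale factor s, and the margin m are illustrative assumptions, not parameters taken from the paper.

```python
import numpy as np

def large_margin_cosine_loss(features, weights, labels, s=30.0, m=0.35):
    """Generic large margin cosine loss (CosFace-style) over a batch.

    features: (N, D) embedding vectors
    weights:  (C, D) class weight vectors
    labels:   (N,) integer class labels
    s, m: scale and cosine margin (illustrative defaults, not from the paper)
    """
    # L2-normalize embeddings and class weights so the logits become cosines
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    w = weights / np.linalg.norm(weights, axis=1, keepdims=True)
    cos = f @ w.T                              # (N, C) cosine similarities

    # Subtract the margin m from the target-class cosine only
    margin = np.zeros_like(cos)
    margin[np.arange(len(labels)), labels] = m
    logits = s * (cos - margin)

    # Numerically stable softmax cross-entropy over the margin-adjusted logits
    logits -= logits.max(axis=1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()

# Toy usage: 4 samples, 8-dimensional embeddings, 3 classes
rng = np.random.default_rng(0)
feats = rng.normal(size=(4, 8))
w = rng.normal(size=(3, 8))
y = np.array([0, 2, 1, 0])
print(large_margin_cosine_loss(feats, w, y))
```

Subtracting the margin only from the target-class cosine forces a wider angular separation between classes in the normalized embedding space, which is the property large-margin-cosine detectors generally build on.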
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2019.2919695