Towards Adversarially Superior Malware Detection Models: An Adversary Aware Proactive Approach using Adversarial Attacks and Defenses
Published in: Information Systems Frontiers, 2023-04, Vol. 25 (2), pp. 567-587
Format: Article
Language: English
Online access: Full text
Abstract: The Android ecosystem (smartphones, tablets, etc.) has grown manifold in the last decade. However, the exponential surge of Android malware is threatening the ecosystem. The literature suggests that Android malware can be detected using machine and deep learning classifiers; however, these detection models might be vulnerable to adversarial attacks. This work investigates the adversarial robustness of twenty-four diverse malware detection models developed using two features and twelve learning algorithms across four categories (machine learning, bagging classifiers, boosting classifiers, and neural networks). We stepped into the adversary's shoes and proposed two false-negative evasion attacks, namely GradAA and GreedAA, to expose vulnerabilities in these detection models. The evasion attack agents transform malware applications into adversarial malware applications by adding minimal noise (at most five perturbations) while maintaining the modified applications' structural, syntactic, and behavioral integrity. These adversarial malware applications force misclassifications and are predicted as benign by the detection models. The evasion attacks achieved average fooling rates of 83.34% (GradAA) and 99.21% (GreedAA), reducing the average accuracy of the twenty-four detection models from 90.35% to 55.22% (GradAA) and 48.29% (GreedAA). We also proposed two defense strategies, namely Adversarial Retraining and Correlation Distillation Retraining, as countermeasures to protect detection models from adversarial attacks. The defense strategies slightly improved the detection accuracy but drastically enhanced the adversarial robustness of the detection models. Finally, investigating the robustness of malware detection models against adversarial attacks is an essential step before their real-world deployment and can help in developing adversarially superior detection models.
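The paper's own attack implementation is not reproduced in this record. As a rough illustration of the idea behind a GradAA-style gradient-guided evasion attack, the minimal sketch below targets a linear (logistic-regression) detector over binary Android features (e.g., permissions or intents). The function name `gradaa_style_attack`, the linear detector, and the greedy flip rule are assumptions for illustration; only the constraint of at most five additive perturbations comes from the abstract. Allowing only 0-to-1 flips (adding features, never removing them) mirrors the requirement that the modified application keep its original structure and behavior.

```python
import numpy as np

def gradaa_style_attack(x, w, b, max_perturbations=5):
    """Illustrative gradient-guided evasion attack (not the paper's code).

    x    : binary malware feature vector (1 = feature present)
    w, b : weights and bias of a logistic-regression detector, where
           sigmoid(w @ x + b) > 0.5 means "malware"
    Only 0 -> 1 flips are allowed, so original code paths are preserved.
    """
    x_adv = x.copy()
    for _ in range(max_perturbations):
        score = 1.0 / (1.0 + np.exp(-(w @ x_adv + b)))
        if score <= 0.5:
            break  # already predicted benign: evasion succeeded
        # For a linear model the gradient of the malware score w.r.t.
        # each feature is proportional to w, so flipping the absent
        # feature with the most negative weight gives the largest
        # single-step drop in the malware score.
        candidates = np.where(x_adv == 0)[0]
        if candidates.size == 0:
            break
        best = candidates[np.argmin(w[candidates])]
        if w[best] >= 0:
            break  # no remaining flip can lower the score
        x_adv[best] = 1
    return x_adv
```

A GreedAA-style variant would query the detector directly instead of using gradients: try each candidate flip, keep the one that most reduces the predicted malware probability, and repeat up to the perturbation budget.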
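Similarly, the Adversarial Retraining defense named in the abstract can be sketched generically: craft adversarial variants of the training malware, add them to the training set still labeled as malware, and refit. The round count, helper names, and the logistic-regression detector below are assumptions; Correlation Distillation Retraining is not sketched here because its details are not given in this record.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def adversarial_retraining(X, y, rounds=3, max_perturbations=5):
    """Illustrative adversarial-retraining loop (not the paper's code).

    X : binary feature matrix; y : labels (1 = malware, 0 = benign).
    Each round augments the training set with adversarial malware
    variants, still labeled as malware, and refits the detector.
    """
    model = LogisticRegression(max_iter=1000).fit(X, y)
    for _ in range(rounds):
        malware = X[y == 1]
        # Reuse the gradient-guided attack sketched above to generate
        # adversarial malware feature vectors against the current model.
        X_adv = np.array([
            gradaa_style_attack(x, model.coef_[0], model.intercept_[0],
                                max_perturbations)
            for x in malware
        ])
        X = np.vstack([X, X_adv])
        y = np.concatenate([y, np.ones(len(X_adv), dtype=y.dtype)])
        model = LogisticRegression(max_iter=1000).fit(X, y)
    return model
```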
ISSN: 1387-3326, 1572-9419
DOI: 10.1007/s10796-022-10331-z