Enhanced DNNs for malware classification with GAN-based adversarial training
Saved in:
Published in: | Journal of Computer Virology and Hacking Techniques 2021-06, Vol.17 (2), p.153-163 |
---|---|
Main authors: | Zhang, Yunchun; Li, Haorui; Zheng, Yang; Yao, Shaowen; Jiang, Jiaqi |
Format: | Article |
Language: | eng |
Subjects: | Artificial neural networks; Classification; Classifiers; Computer Science; Deep learning; Machine learning; Malware; Performance degradation; Perturbation; Training |
Online access: | Full text |
container_end_page | 163 |
---|---|
container_issue | 2 |
container_start_page | 153 |
container_title | Journal of Computer Virology and Hacking Techniques |
container_volume | 17 |
creator | Zhang, Yunchun; Li, Haorui; Zheng, Yang; Yao, Shaowen; Jiang, Jiaqi |
description | Deep-learning-based malware classification has gained momentum recently. However, deep learning models are vulnerable to adversarial perturbation attacks, especially in network security applications. Deep neural network (DNN)-based malware classifiers that consume whole raw bit sequences are also vulnerable, despite their satisfactory performance and minimal feature engineering. This paper therefore proposes a DNN-based malware classifier that operates on the raw bit sequences of Windows programs. We then propose two adversarial attacks that target the trained DNNs to generate adversarial malware. As a defensive mechanism, perturbations are treated as noise added to the bit sequences: a generative adversarial network (GAN)-based model filters out the perturbation noise, and the samples most likely to fool the target DNNs are selected for adversarial training. Experiments show that the filter-based GAN model produces the highest-quality adversarial samples at medium cost, with an average evasion ratio as high as 50.64%. When GAN-generated adversarial samples are incorporated into training, the enhanced DNN achieves a satisfactory 90.20% accuracy while keeping the evasion ratio below 9.47%. The GAN thus secures the DNN-based malware classifier with negligible performance degradation compared with the original DNN, and the evasion ratio is markedly reduced even under powerful adversarial attacks, including FGSM_r and FGSM_k. |
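The FGSM attacks mentioned in the abstract perturb each input element by a fixed step in the direction of the loss gradient's sign. As a rough illustration only — not the authors' byte-level FGSM_r/FGSM_k variants — the following NumPy sketch applies the generic FGSM step to a toy continuous feature vector scored by a hypothetical logistic classifier; all names, weights, and values here are invented for the example.

```python
import numpy as np

def fgsm_perturb(x, grad, eps):
    """Fast Gradient Sign Method: shift every input element a fixed
    step eps in the direction that increases the loss."""
    return x + eps * np.sign(grad)

# Toy logistic "classifier": p(malicious) = sigmoid(w . x)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
w = np.array([0.5, -1.2, 2.0])   # hypothetical trained weights
x = np.array([1.0, 0.0, 1.0])    # hypothetical malware feature vector

# For the negative log-likelihood of the true label y=1, the
# gradient w.r.t. x is -(1 - p) * w; FGSM follows its sign.
p = sigmoid(w @ x)
grad = -(1.0 - p) * w
x_adv = fgsm_perturb(x, grad, eps=0.1)

print(sigmoid(w @ x), sigmoid(w @ x_adv))  # adversarial score is lower
```

In the paper's setting the inputs are discrete bit sequences rather than continuous features, so the raw FGSM step has to be constrained to valid byte values — which is presumably what the FGSM_r and FGSM_k variants address.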
doi_str_mv | 10.1007/s11416-021-00378-y |
format | Article |
publisher | Paris: Springer Paris |
fulltext | fulltext |
identifier | ISSN: 2263-8733 |
ispartof | Journal of Computer Virology and Hacking Techniques, 2021-06, Vol.17 (2), p.153-163 |
issn | 2263-8733 2263-8733 |
language | eng |
recordid | cdi_proquest_journals_2529973071 |
source | Alma/SFX Local Collection; SpringerLink Journals - AutoHoldings |
subjects | Artificial neural networks; Classification; Classifiers; Computer Science; Deep learning; Machine learning; Malware; Original Paper; Performance degradation; Perturbation; Training |
title | Enhanced DNNs for malware classification with GAN-based adversarial training |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2024-12-28T16%3A05%3A02IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_cross&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Enhanced%20DNNs%20for%20malware%20classification%20with%20GAN-based%20adversarial%20training&rft.jtitle=Journal%20of%20Computer%20Virology%20and%20Hacking%20Techniques&rft.au=Zhang,%20Yunchun&rft.date=2021-06-01&rft.volume=17&rft.issue=2&rft.spage=153&rft.epage=163&rft.pages=153-163&rft.issn=2263-8733&rft.eissn=2263-8733&rft_id=info:doi/10.1007/s11416-021-00378-y&rft_dat=%3Cproquest_cross%3E2529973071%3C/proquest_cross%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2529973071&rft_id=info:pmid/&rfr_iscdi=true |