A Survey on Attacks and Their Countermeasures in Deep Learning: Applications in Deep Neural Networks, Federated, Transfer, and Deep Reinforcement Learning
Deep Learning (DL) techniques are being used in various critical applications like self-driving cars. DL techniques such as Deep Neural Networks (DNN), Deep Reinforcement Learning (DRL), Federated Learning (FL), and Transfer Learning (TL) are prone to adversarial attacks, which can make the DL techniques perform poorly. Developing such attacks and their countermeasures is the prerequisite for making artificial intelligence techniques robust, secure, and deployable. Previous survey papers only focused on one or two techniques and are outdated. They do not discuss application domains, datasets, and testbeds in detail. There is also a need to discuss the commonalities and differences among DL techniques. In this paper, we comprehensively discussed the attacks and defenses in four popular DL models, including DNN, DRL, FL, and TL. We also highlighted the application domains, datasets, metrics, and testbeds in these fields. One of our key contributions is to discuss the commonalities and differences among these DL techniques. Insights, lessons, and future research directions are also highlighted in detail.
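The adversarial attacks the survey covers can be illustrated with a minimal sketch of one canonical evasion attack, the Fast Gradient Sign Method (FGSM); this specific attack, the toy one-layer logistic "network", and all weight/input values below are illustrative assumptions, not taken from the paper.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def bce_loss(w, b, x, y):
    """Binary cross-entropy of a logistic model p = sigmoid(w.x + b)."""
    p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
    return -(y * math.log(p) + (1 - y) * math.log(1 - p))

def fgsm(w, b, x, y, eps):
    """Perturb the input x by eps in the sign of the loss gradient.

    For this model the gradient of the loss w.r.t. the input is (p - y) * w.
    """
    p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
    grad = [(p - y) * wi for wi in w]
    return [xi + eps * (1.0 if g > 0 else -1.0 if g < 0 else 0.0)
            for xi, g in zip(x, grad)]

# Tiny demo with made-up values: the perturbed input raises the
# model's loss on the true label, i.e. degrades its prediction.
w, b = [1.0, -2.0], 0.1
x, y = [0.5, -0.5], 1.0
x_adv = fgsm(w, b, x, y, eps=0.25)
assert bce_loss(w, b, x_adv, y) > bce_loss(w, b, x, y)
```

The same gradient-sign idea scales to deep networks, where the input gradient is obtained by backpropagation instead of the closed form used here.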
Published in: | IEEE Access, 2023, Vol. 11, pp. 120095-120130 |
---|---|
Main authors: | Ali, Haider; Chen, Dian; Harrington, Matthew; Salazar, Nathaniel; Ameedi, Mohannad Al; Khan, Ahmad Faraz; Butt, Ali R.; Cho, Jin-Hee |
Format: | Article |
Language: | English |
Online access: | Full text |
container_title | IEEE Access |
---|---|
container_volume | 11 |
container_start_page | 120095 |
container_end_page | 120130 |
creator | Ali, Haider; Chen, Dian; Harrington, Matthew; Salazar, Nathaniel; Ameedi, Mohannad Al; Khan, Ahmad Faraz; Butt, Ali R.; Cho, Jin-Hee |
description | Deep Learning (DL) techniques are being used in various critical applications like self-driving cars. DL techniques such as Deep Neural Networks (DNN), Deep Reinforcement Learning (DRL), Federated Learning (FL), and Transfer Learning (TL) are prone to adversarial attacks, which can make the DL techniques perform poorly. Developing such attacks and their countermeasures is the prerequisite for making artificial intelligence techniques robust, secure, and deployable. Previous survey papers only focused on one or two techniques and are outdated. They do not discuss application domains, datasets, and testbeds in detail. There is also a need to discuss the commonalities and differences among DL techniques. In this paper, we comprehensively discussed the attacks and defenses in four popular DL models, including DNN, DRL, FL, and TL. We also highlighted the application domains, datasets, metrics, and testbeds in these fields. One of our key contributions is to discuss the commonalities and differences among these DL techniques. Insights, lessons, and future research directions are also highlighted in detail. |
doi | 10.1109/ACCESS.2023.3326410 |
format | Article |
identifier | ISSN: 2169-3536 |
source | Electronic Journals Library; DOAJ Directory of Open Access Journals; IEEE Xplore Open Access Journals |
subjects | Artificial intelligence; Artificial neural networks; Attacks; Autonomous cars; Datasets; Deep learning; deep neural networks; deep reinforcement learning; defenses; Federated learning; Machine learning; Measurement; Neural networks; Reinforcement learning; Security; Surveys; Test stands; Transfer learning |