Advances in adversarial attacks and defenses in computer vision: A survey

Deep Learning (DL) is the most widely used tool in the contemporary field of computer vision. Its ability to accurately solve complex problems is employed in vision research to learn deep neural models for a variety of tasks, including security-critical applications. However, it is now known that DL is vulnerable to adversarial attacks that can manipulate its predictions by introducing visually imperceptible perturbations in images and videos. Since the discovery of this phenomenon in 2013 [1], it has attracted significant attention from researchers in multiple sub-fields of machine intelligence. In [2], we reviewed the contributions made by the computer vision community to adversarial attacks on deep learning (and their defenses) up to 2018. Many of those contributions have inspired new directions in this area, which has matured significantly since the first-generation methods. Hence, as a sequel to [2], this literature review focuses on the advances in this area since 2018. To ensure authenticity, we mainly consider peer-reviewed contributions published in prestigious venues of computer vision and machine learning research. Besides a comprehensive literature review, the article also provides concise definitions of technical terminology for non-experts in this domain. Finally, the article discusses challenges and the future outlook of this direction based on the literature reviewed herein and in [2].
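The "visually imperceptible perturbations" the abstract refers to can be illustrated with the classic fast gradient sign method (FGSM). The sketch below is not from the survey: it uses a toy linear classifier with synthetic weights as a stand-in for a deep network, purely to show how a small, norm-bounded change to the input can move a model's prediction.

```python
import numpy as np

# Toy linear "model": logits = W @ x. The weights and input are
# hypothetical stand-ins for a trained network and an image.
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 8))       # 3 classes, 8 input features
x = rng.normal(size=8)            # a clean "image" (flattened)
y = int(np.argmax(W @ x))         # model's prediction on the clean input

def softmax(z):
    z = z - z.max()               # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

def fgsm(W, x, y, eps):
    """One-step FGSM: move x by eps in the sign of the loss gradient."""
    p = softmax(W @ x)
    onehot = np.eye(W.shape[0])[y]
    grad_x = W.T @ (p - onehot)   # d(cross-entropy)/dx for label y
    return x + eps * np.sign(grad_x)

x_adv = fgsm(W, x, y, eps=0.5)
# The perturbation is bounded: every component changes by at most eps,
# yet the perturbed logits W @ x_adv may now favor a different class.
print(np.max(np.abs(x_adv - x)))
```

Taking the sign of the gradient (rather than the gradient itself) maximizes the loss increase under an L-infinity budget of eps, which is why FGSM perturbations can be kept small enough to be visually imperceptible on real images.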

Detailed Description

Bibliographic Details
Main Authors: Akhtar, Naveed; Mian, Ajmal; Kardan, Navid; Shah, Mubarak
Format: Article
Language: English
Online Access: Order full text
creator Akhtar, Naveed; Mian, Ajmal; Kardan, Navid; Shah, Mubarak
format Article
identifier DOI: 10.48550/arxiv.2108.00401
language eng
recordid cdi_arxiv_primary_2108_00401
source arXiv.org
subjects Computer Science - Computer Vision and Pattern Recognition
Computer Science - Computers and Society
Computer Science - Cryptography and Security
Computer Science - Learning
title Advances in adversarial attacks and defenses in computer vision: A survey
url https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-08T08%3A49%3A43IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-arxiv_GOX&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Advances%20in%20adversarial%20attacks%20and%20defenses%20in%20computer%20vision:%20A%20survey&rft.au=Akhtar,%20Naveed&rft.date=2021-08-01&rft_id=info:doi/10.48550/arxiv.2108.00401&rft_dat=%3Carxiv_GOX%3E2108_00401%3C/arxiv_GOX%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_id=info:pmid/&rfr_iscdi=true