Advances in adversarial attacks and defenses in computer vision: A survey
Main authors:
Format: Article
Language: English
Subjects:
Online access: Order full text
Abstract: Deep Learning (DL) is the most widely used tool in the contemporary field of computer vision. Its ability to accurately solve complex problems is employed in vision research to learn deep neural models for a variety of tasks, including security-critical applications. However, it is now known that DL is vulnerable to adversarial attacks that can manipulate its predictions by introducing visually imperceptible perturbations in images and videos. Since the discovery of this phenomenon in 2013 [1], it has attracted significant attention from researchers across multiple sub-fields of machine intelligence. In [2], we reviewed the contributions made by the computer vision community to adversarial attacks on deep learning (and their defenses) until the advent of 2018. Many of those contributions have inspired new directions in this area, which has matured significantly since the first-generation methods. Hence, as a sequel to [2], this literature review focuses on the advances in this area since 2018. To ensure authenticity, we mainly consider peer-reviewed contributions published in the prestigious venues of computer vision and machine learning research. Besides a comprehensive literature review, the article also provides concise definitions of technical terminology for non-experts in this domain. Finally, the article discusses challenges and the future outlook of this direction based on the literature reviewed herein and in [2].
DOI: 10.48550/arxiv.2108.00401
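
To make concrete the kind of visually imperceptible perturbation the abstract refers to, the sketch below implements a one-step FGSM-style attack (signed-gradient perturbation) in PyTorch. The model, inputs, labels, and epsilon value are placeholders assumed for illustration; this is not a specific method from the surveyed literature.

```python
# Minimal FGSM-style sketch: perturb inputs in the direction that increases
# the classification loss, bounded by epsilon in L-infinity norm.
# Assumes a PyTorch classifier `model`, an input batch `images` in [0, 1],
# and integer class `labels` (all placeholders).
import torch
import torch.nn.functional as F

def fgsm_perturb(model, images, labels, epsilon=8 / 255):
    """Return adversarial examples produced by one signed-gradient step."""
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    # Step along the sign of the input gradient, then clip to the valid range.
    adv = images + epsilon * images.grad.sign()
    return adv.clamp(0.0, 1.0).detach()
```

With a small epsilon (e.g., 8/255 on pixel values in [0, 1]), the perturbed images are typically indistinguishable from the originals to a human observer, yet can change the model's predictions.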