Developing and Defeating Adversarial Examples
Format: Article
Language: English
Online access: Order full text
Abstract: Breakthroughs in machine learning have resulted in state-of-the-art deep
neural networks (DNNs) performing classification tasks in safety-critical
applications. Recent research has demonstrated that DNNs can be attacked
through adversarial examples, which are small perturbations to input data that
cause the DNN to misclassify objects. The proliferation of DNNs raises
important safety concerns about designing systems that are robust to
adversarial examples. In this work we develop adversarial examples to attack
the Yolo V3 object detector [1] and then study strategies to detect and
neutralize these examples. Python code for this project is available at
https://github.com/ianmcdiarmidsterling/adversarial
DOI: 10.48550/arxiv.2008.10106
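
To make the abstract's definition concrete, the sketch below shows one standard way such a perturbation can be generated, the fast gradient sign method (FGSM) of Goodfellow et al. This is a generic illustration, not the attack the paper develops against Yolo V3; the PyTorch classifier, the epsilon value, and the dummy inputs are placeholder assumptions.

```python
# A minimal sketch of the fast gradient sign method (FGSM, Goodfellow et al., 2015),
# shown only to illustrate what a "small perturbation that causes misclassification"
# looks like in code. The model, epsilon, and inputs below are placeholders and are
# not the attack or the Yolo V3 setup used in the paper.
import torch
import torch.nn as nn

def fgsm_perturb(model, image, label, epsilon=0.01):
    """Return a perturbed copy of `image` (pixel values assumed to lie in [0, 1])."""
    image = image.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(image), label)
    loss.backward()
    # Step each pixel by +/- epsilon in the direction that increases the loss.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()

if __name__ == "__main__":
    # Tiny stand-in classifier and dummy data, just so the sketch runs end to end.
    model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10)).eval()
    x = torch.rand(1, 3, 32, 32)   # dummy "image"
    y = torch.tensor([0])          # dummy true label
    x_adv = fgsm_perturb(model, x, y)
    print("max per-pixel change:", (x_adv - x).abs().max().item())
```

The epsilon parameter bounds how much each pixel can change; keeping it small is what makes such perturbations hard to notice while still shifting the model's prediction. Per the abstract, the paper develops perturbations of this general kind against the Yolo V3 object detector and then studies strategies to detect and neutralize them.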