Increasing robustness of deep neural network models against adversarial attacks
Published in: Journal of Physics: Conference Series, 2021-02, Vol. 1797 (1), p. 012005
Main authors: ,
Format: Article
Language: English
Keywords:
Online access: Full text
Abstract: In autonomous driving, detecting objects correctly is critical, and studies have shown that placing a small pattern on an object can intentionally fool a network. Small, intentional changes in the input can significantly distort the output of a deep neural network model, leaving machine-learned models vulnerable to minor image perturbations and giving them a wide scope for failure. Tackling these intentional attacks helps make the system more robust.
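To make the threat concrete, the sketch below generates such a perturbation with the fast gradient sign method (FGSM); the attack family is an assumption, since the abstract does not name one, and `model` stands for any PyTorch image classifier.

```python
# Minimal FGSM sketch (an assumed attack; the abstract does not name the
# paper's exact attack). `model` is a hypothetical PyTorch classifier.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, images, labels, eps=8 / 255):
    """Return copies of `images` shifted by a small, loss-increasing step."""
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    # One signed gradient step of size eps, clamped back to the valid range.
    return (images + eps * images.grad.sign()).clamp(0.0, 1.0).detach()
```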
In this project, we combine multiple techniques for defending against adversarial attacks. The first is adversarial training, which modifies the training dataset; the second pre-processes the input data before it is fed to the deep-learned model; the third selects an image pre-processing technique at random. The third method is meant to distract an attacker who knows which pre-processing method is in use, by choosing randomly among multiple image pre-processing methods, as in the sketch below.
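A minimal sketch of how the three techniques could fit together, assuming PyTorch and the `fgsm_perturb` helper above; the particular transform pool (blur, bit-depth reduction, down/up-sampling) is an illustrative assumption, not the paper's exact choice.

```python
# Combined defence sketch: random input pre-processing plus adversarial
# training. The transform pool below is an illustrative assumption.
import random
import torch
import torch.nn.functional as F

def blur(x):
    # 3x3 mean blur via a depthwise convolution.
    k = torch.ones(x.size(1), 1, 3, 3, device=x.device) / 9.0
    return F.conv2d(x, k, padding=1, groups=x.size(1))

def bit_squeeze(x, bits=4):
    # Reduce colour depth, wiping out low-amplitude perturbations.
    levels = 2 ** bits - 1
    return torch.round(x * levels) / levels

def resample(x):
    # Down- then up-sample, discarding high-frequency noise.
    h, w = x.shape[-2:]
    small = F.interpolate(x, scale_factor=0.5, mode="bilinear",
                          align_corners=False)
    return F.interpolate(small, size=(h, w), mode="bilinear",
                         align_corners=False)

PREPROCESS_POOL = [blur, bit_squeeze, resample]

def random_preprocess(x):
    # Random selection keeps a knowledgeable attacker guessing.
    return random.choice(PREPROCESS_POOL)(x)

def adversarial_training_step(model, optimizer, images, labels, eps=8 / 255):
    """One step of adversarial training on a clean + adversarial batch."""
    adv = fgsm_perturb(model, images, labels, eps)  # from the sketch above
    batch = random_preprocess(torch.cat([images, adv]))
    targets = torch.cat([labels, labels])
    optimizer.zero_grad()
    loss = F.cross_entropy(model(batch), targets)
    loss.backward()
    optimizer.step()
    return loss.item()
```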
We measure the robustness of the deep-learned model in terms of accuracy, comparing the designed system against the previous deep-learned model. Test samples and adversarial images generated from the dataset are used to test the deep-learned model.
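As a sketch of that measurement, again under the assumptions above (a hypothetical `model`, a `test_loader`, and the helpers from the earlier sketches), clean and adversarial accuracy can be compared like this:

```python
# Evaluation sketch: accuracy on clean test samples vs. adversarial images
# generated from the same dataset. All names are assumptions from above.
import torch

@torch.no_grad()
def count_correct(model, images, labels):
    preds = model(random_preprocess(images)).argmax(dim=1)
    return (preds == labels).sum().item()

def evaluate(model, test_loader, eps=8 / 255):
    model.eval()
    clean = adv = total = 0
    for images, labels in test_loader:
        adv_images = fgsm_perturb(model, images, labels, eps)  # needs grads
        clean += count_correct(model, images, labels)
        adv += count_correct(model, adv_images, labels)
        total += labels.size(0)
    return clean / total, adv / total  # clean accuracy, adversarial accuracy
```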
Among all the methods we combined, adversarial training proved the best defence against white-box attacks. Had stronger defences been used within the random selection of image transformations, the system could have performed much better. Even so, random selection did its work of confusing the attacker by choosing transformations at random.
ISSN: 1742-6588, 1742-6596
DOI: 10.1088/1742-6596/1797/1/012005