CausAdv: A Causal-based Framework for Detecting Adversarial Examples
Format: Article
Language: English
Online access: Order full text
Abstract: Deep learning has led to tremendous success in many real-world applications of computer vision, thanks to sophisticated architectures such as convolutional neural networks (CNNs). However, CNNs have been shown to be vulnerable to crafted adversarial perturbations in inputs. These inputs appear almost indistinguishable from natural images, yet they are incorrectly classified by CNN architectures. This vulnerability to adversarial examples has led researchers to focus on enhancing the robustness of deep learning models in general, and of CNNs in particular, by creating defense and detection methods that distinguish adversarial inputs from natural ones. In this paper, we address the adversarial robustness of CNNs through causal reasoning. We propose CausAdv: a causal framework for detecting adversarial examples based on counterfactual reasoning. CausAdv learns the causal and non-causal features of every input and quantifies the counterfactual information (CI) of every filter of the last convolutional layer. We then perform statistical analysis on the filters' CI of every sample, whether clean or adversarial, to demonstrate that adversarial examples indeed exhibit different CI distributions compared to clean samples. Our results show that causal reasoning enhances adversarial detection without the need to train a separate detector. In addition, we illustrate the effectiveness of causal explanations as a helpful detection technique by visualizing the causal features. The results can be reproduced using the code available in the repository: https://github.com/HichemDebbi/CausAdv
DOI: 10.48550/arxiv.2411.00839
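
The abstract's core quantity, the counterfactual information (CI) of a filter, lends itself to a short illustration. The following is a minimal sketch, not the authors' implementation (which is in the linked repository): it assumes a PyTorch ResNet-18 as a stand-in CNN and scores each filter of the last convolutional layer by the drop in the predicted-class probability when that filter's activation map is zeroed out.

```python
# Hedged sketch of the counterfactual-information (CI) idea: ablate one filter
# of the last conv layer at a time and measure the change in the predicted
# class probability. Model, layer, and function names are assumptions.
import torch
import torchvision.models as models

model = models.resnet18(weights="IMAGENET1K_V1").eval()
last_conv = model.layer4[-1].conv2  # last convolutional layer (assumption)

def counterfactual_information(x):
    """Return one CI score per filter of `last_conv` for input batch x."""
    ablate = {"idx": None}  # which filter to zero out, if any

    def hook(module, inputs, output):
        if ablate["idx"] is not None:
            output = output.clone()
            output[:, ablate["idx"]] = 0.0  # remove this filter's activation map
        return output  # returning a tensor replaces the layer's output

    handle = last_conv.register_forward_hook(hook)
    try:
        with torch.no_grad():
            probs = torch.softmax(model(x), dim=1)
            cls = probs.argmax(dim=1).item()   # originally predicted class
            p_orig = probs[0, cls].item()
            ci = torch.zeros(last_conv.out_channels)
            for k in range(last_conv.out_channels):
                ablate["idx"] = k
                p_k = torch.softmax(model(x), dim=1)[0, cls].item()
                ci[k] = p_orig - p_k  # positive CI suggests a causal filter
            ablate["idx"] = None
    finally:
        handle.remove()
    return ci

# Per the abstract, detection would then compare statistics of these CI
# values (e.g., their distribution) between clean and suspect inputs.
x = torch.randn(1, 3, 224, 224)  # stand-in input
scores = counterfactual_information(x)
print((scores > 0).sum().item(), "filters with positive CI")
```

A follow-up step, as the abstract describes, would run this on known-clean samples to establish a reference CI distribution, then flag inputs whose CI statistics deviate from it, so no separate detector network needs to be trained.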