Adversarial Threats to DeepFake Detection: A Practical Perspective
| Main authors: | , , , |
|---|---|
| Format: | Article |
| Language: | English |
| Subjects: | |
| Online access: | Order full text |
Abstract: Facially manipulated images and videos, or DeepFakes, can be used maliciously to fuel misinformation or defame individuals. Detecting DeepFakes is therefore crucial for maintaining the credibility of social media platforms and other media-sharing websites. State-of-the-art DeepFake detection techniques rely on neural-network-based classification models, which are known to be vulnerable to adversarial examples. In this work, we study the vulnerabilities of state-of-the-art DeepFake detection methods from a practical standpoint. We perform adversarial attacks on DeepFake detectors in a black-box setting, where the adversary does not have complete knowledge of the classification models. We study the extent to which adversarial perturbations transfer across different models and propose techniques to improve the transferability of adversarial examples. We also create more accessible attacks using Universal Adversarial Perturbations, which pose a very feasible attack scenario since a single perturbation can easily be shared among attackers. We perform our evaluations on the winning entries of the DeepFake Detection Challenge (DFDC) and demonstrate that these detectors can be easily bypassed in a practical attack scenario by designing transferable and accessible adversarial attacks.
DOI: 10.48550/arxiv.2011.09957
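The black-box transfer scenario summarized in the abstract can be illustrated with a minimal sketch, assuming PyTorch and purely hypothetical placeholders: `surrogate_model` (a detector the adversary trains or obtains locally), `target_model` (the unseen black-box detector), `frames` (face crops or video frames scaled to [0, 1]), and `delta` (a pre-computed universal perturbation). None of these names, hyperparameters, or label conventions come from the paper; this is not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def pgd_on_surrogate(surrogate_model, frames, eps=8/255, alpha=2/255, steps=10):
    """Craft an L-infinity bounded perturbation on a local surrogate detector
    that pushes 'fake' inputs toward the class assumed here to mean 'real' (index 0)."""
    surrogate_model.eval()
    x_adv = frames.clone().detach()
    real_label = torch.zeros(frames.size(0), dtype=torch.long, device=frames.device)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(surrogate_model(x_adv), real_label)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Descend on the loss, then project back into the eps-ball around the input.
        x_adv = x_adv.detach() - alpha * grad.sign()
        x_adv = frames + torch.clamp(x_adv - frames, min=-eps, max=eps)
        x_adv = torch.clamp(x_adv, 0.0, 1.0)
    return x_adv.detach()

def apply_universal_perturbation(frames, delta, eps=8/255):
    """Add one shared, input-agnostic perturbation to every frame; because it is
    a single tensor, it can be distributed among attackers and reused as-is."""
    delta = torch.clamp(delta, min=-eps, max=eps)
    return torch.clamp(frames + delta, 0.0, 1.0)

# Transfer step: the adversary never queries gradients of the black-box target;
# the perturbation crafted on the surrogate (or the shared universal delta) is
# simply reused against it.
# adv_frames = pgd_on_surrogate(surrogate_model, frames)
# scores = target_model(adv_frames)  # hypothetical black-box DeepFake detector
```

The sketch uses plain projected gradient descent on a surrogate as the crafting method; the abstract's point is that, whatever method is used, the perturbation only needs to transfer to, or apply broadly against, detectors the adversary cannot inspect.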