Attacking Image Splicing Detection and Localization Algorithms Using Synthetic Traces
Saved in:
Main authors: | , |
---|---|
Format: | Article |
Language: | eng |
Subjects: | |
Online access: | Order full text |
Abstract: | Recent advances in deep learning have enabled forensics researchers to
develop a new class of image splicing detection and localization algorithms.
These algorithms identify spliced content by detecting localized
inconsistencies in forensic traces using Siamese neural networks, either
explicitly during analysis or implicitly during training. At the same time,
deep learning has enabled new forms of anti-forensic attacks, such as
adversarial examples and generative adversarial network (GAN) based attacks.
Thus far, however, no anti-forensic attack has been demonstrated against image
splicing detection and localization algorithms. In this paper, we propose a new
GAN-based anti-forensic attack that is able to fool state-of-the-art splicing
detection and localization algorithms such as EXIF-Net, Noiseprint, and
Forensic Similarity Graphs. This attack operates by adversarially training an
anti-forensic generator against a set of Siamese neural networks so that it is
able to create synthetic forensic traces. Under analysis, these synthetic
traces appear authentic and are self-consistent throughout an image. Through a
series of experiments, we demonstrate that our attack is capable of fooling
forensic splicing detection and localization algorithms without introducing
visually detectable artifacts into an attacked image. Additionally, we
demonstrate that our attack outperforms existing alternative attack approaches.
|
---|---|
DOI: | 10.48550/arxiv.2211.12314 |
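
The mechanism summarized in the abstract, an anti-forensic generator trained adversarially against Siamese networks that score the forensic consistency of patch pairs, can be sketched roughly as below. This is a minimal illustration under assumptions, not the authors' implementation: the network architectures, patch size, loss weighting, and names such as `SiameseSimilarityNet` and `AntiForensicGenerator` are all hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SiameseSimilarityNet(nn.Module):
    """Hypothetical stand-in for one of the Siamese analysis networks:
    scores whether two patches carry the same forensic traces (1 = same)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.compare = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, a, b):
        f = torch.cat([self.features(a), self.features(b)], dim=1)
        return torch.sigmoid(self.compare(f))

class AntiForensicGenerator(nn.Module):
    """Hypothetical generator that resynthesizes forensic traces via a
    small residual edit, keeping the attacked image visually unchanged."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 3, padding=1),
        )

    def forward(self, x):
        return torch.clamp(x + self.net(x), 0.0, 1.0)

def random_patch_pairs(img, patch=64, n_pairs=8):
    """Sample random pairs of patches from a batch of images."""
    _, _, h, w = img.shape
    for _ in range(n_pairs):
        y1, y2 = torch.randint(0, h - patch, (2,)).tolist()
        x1, x2 = torch.randint(0, w - patch, (2,)).tolist()
        yield (img[:, :, y1:y1 + patch, x1:x1 + patch],
               img[:, :, y2:y2 + patch, x2:x2 + patch])

gen = AntiForensicGenerator()
siamese = SiameseSimilarityNet().eval()   # frozen surrogate analyst (assumed)
for p in siamese.parameters():
    p.requires_grad_(False)
opt = torch.optim.Adam(gen.parameters(), lr=1e-4)

spliced = torch.rand(4, 3, 256, 256)      # placeholder batch of spliced images

for step in range(100):
    attacked = gen(spliced)
    # Consistency loss: every patch pair in the attacked image should be
    # judged "same forensic traces" by the Siamese analyst.
    scores = [siamese(a, b) for a, b in random_patch_pairs(attacked)]
    cons = torch.stack(
        [F.binary_cross_entropy(s, torch.ones_like(s)) for s in scores]
    ).mean()
    # Fidelity loss: the attack must not introduce visible artifacts.
    fid = F.l1_loss(attacked, spliced)
    loss = cons + 10.0 * fid              # loss weighting is an assumption
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The abstract describes training against a set of Siamese networks; a single frozen network stands in here for simplicity. In a fully adversarial (GAN-style) setup, the Siamese discriminators would be updated in alternation with the generator rather than held fixed.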