Adversarial Analysis for Source Camera Identification
Published in: IEEE Transactions on Circuits and Systems for Video Technology, 2021-11, Vol. 31 (11), pp. 4174-4186
Format: Article
Language: English
Abstract: Recent studies highlight the vulnerability of convolutional neural networks (CNNs) to adversarial attacks, which also calls into question the reliability of forensic methods. Existing adversarial attacks generate one-to-one noise, which means these methods have not learned the fingerprint information. We therefore introduce two powerful attacks: a fingerprint copy-move attack and a joint feature-based auto-learning attack. To validate the performance of the attack methods, we go a step further and introduce a stronger defense mechanism, relation mismatch, which expands the characterization differences between classifiers in the same classification network. Extensive experiments show that relation mismatch is superior at recognizing adversarial examples and confirm that the proposed fingerprint-based attacks are more powerful. Both proposed attacks also show excellent transferability to unknown samples. The PyTorch implementations of these methods can be downloaded from the open-source GitHub project https://github.com/Dlut-lab-zmn/Source-attack.
ISSN: 1051-8215, 1558-2205
DOI: 10.1109/TCSVT.2020.3047084
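For context on the kind of "one-to-one noise" attack the abstract contrasts with its fingerprint-based methods, the sketch below shows a generic FGSM-style adversarial perturbation against a camera-model classifier. This is an assumption-laden illustration, not the paper's fingerprint copy-move or joint feature-based auto-learning attack; the classifier, input size, and epsilon are hypothetical placeholders.

```python
# Minimal sketch (assumption): a generic FGSM perturbation against a
# camera-model identification CNN, illustrating per-image ("one-to-one")
# adversarial noise. Not the attacks proposed in the cited paper.
import torch
import torch.nn.functional as F
import torchvision.models as models

num_camera_models = 10  # hypothetical number of camera models
model = models.resnet18(num_classes=num_camera_models)  # placeholder classifier
model.eval()

def fgsm_attack(image, true_label, epsilon=0.01):
    """Return an adversarially perturbed copy of `image` (values in [0, 1])."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    # Step in the direction that increases the loss, then clamp to valid range.
    adv_image = image + epsilon * image.grad.sign()
    return adv_image.clamp(0.0, 1.0).detach()

# Usage with dummy data: one 224x224 RGB patch and a placeholder label.
x = torch.rand(1, 3, 224, 224)
y = torch.tensor([3])
x_adv = fgsm_attack(x, y)
print((model(x_adv).argmax(1) != y).item())  # True if the prediction flipped
```

Because this noise is computed per image from the classifier's gradient, it does not capture any camera fingerprint; the abstract's point is that fingerprint-aware attacks go beyond this baseline.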