Robustness of AI-Image Detectors: Fundamental Limits and Practical Attacks
Format: | Article |
Language: | English |
Abstract: | In light of recent advancements in generative AI models, it has become essential to distinguish genuine content from AI-generated content, to prevent the malicious use of fake material as authentic and vice versa. Various techniques have been introduced for identifying AI-generated images, with watermarking emerging as a promising approach. In this paper, we analyze the robustness of various AI-image detectors, including watermarking and classifier-based deepfake detectors. For watermarking methods that introduce subtle image perturbations (i.e., low-perturbation-budget methods), we reveal a fundamental trade-off between the evasion error rate (i.e., the fraction of watermarked images detected as non-watermarked) and the spoofing error rate (i.e., the fraction of non-watermarked images detected as watermarked) under a diffusion purification attack. To validate our theoretical findings, we also provide empirical evidence demonstrating that diffusion purification effectively removes low-perturbation-budget watermarks while applying only minimal changes to images. The diffusion purification attack is ineffective against high-perturbation-budget watermarking methods, where notable changes are applied to images. For this case, we develop a model-substitution adversarial attack that can successfully remove watermarks. Moreover, we show that watermarking methods are vulnerable to spoofing attacks in which the attacker aims to have real images identified as watermarked, damaging the reputation of the developers. In particular, with black-box access to the watermarking method, a watermarked noise image can be generated and added to real images, causing them to be incorrectly classified as watermarked. Finally, we extend our theory to characterize a fundamental trade-off between the robustness and reliability of classifier-based deepfake detectors and demonstrate it through experiments. |
DOI: | 10.48550/arxiv.2310.00076 |
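
The evasion/spoofing trade-off described in the abstract can be illustrated with a standard hypothesis-testing argument. The bound below is a generic total-variation (Le Cam-style) inequality written with illustrative symbols ($P_w$, $P_n$, $\varepsilon_e$, $\varepsilon_s$); it is not the paper's exact theorem, which may use a different distance measure and constants.

```latex
% Illustrative lower bound, not the paper's exact statement. Let P_w and P_n
% denote the distributions of *purified* watermarked and non-watermarked
% images, and let D be any detector.
\[
  \underbrace{\Pr_{x \sim P_w}\big[D(x) = \mathrm{not\ watermarked}\big]}_{\text{evasion error }\varepsilon_e}
  \;+\;
  \underbrace{\Pr_{x \sim P_n}\big[D(x) = \mathrm{watermarked}\big]}_{\text{spoofing error }\varepsilon_s}
  \;\ge\; 1 - \mathrm{TV}\!\left(P_w, P_n\right).
\]
% For low-perturbation-budget watermarks, purification makes P_w and P_n nearly
% indistinguishable, so TV(P_w, P_n) shrinks and every detector must have
% evasion + spoofing error approaching 1.
```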
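The diffusion purification attack mentioned in the abstract can be sketched as: run a few steps of forward diffusion on the watermarked image, then denoise back with a pretrained diffusion model. The sketch below is a minimal illustration assuming an unconditional DDPM from Hugging Face `diffusers`; the checkpoint name, noise level `t_star`, and pre/post-processing are assumptions, not the paper's exact configuration.

```python
# Sketch of diffusion purification for removing a low-budget image watermark.
import torch
from diffusers import DDPMPipeline

device = "cuda" if torch.cuda.is_available() else "cpu"
pipe = DDPMPipeline.from_pretrained("google/ddpm-cifar10-32").to(device)  # illustrative checkpoint
unet, scheduler = pipe.unet, pipe.scheduler
scheduler.set_timesteps(scheduler.config.num_train_timesteps)

@torch.no_grad()
def purify(x0: torch.Tensor, t_star: int = 100) -> torch.Tensor:
    """Add t_star steps of forward noise, then denoise back to t = 0.

    x0: watermarked images scaled to [-1, 1], shape (B, 3, H, W).
    A small t_star perturbs the image only slightly, which is enough to wash
    out subtle (low-perturbation-budget) watermarks.
    """
    noise = torch.randn_like(x0)
    t = torch.full((x0.shape[0],), t_star, device=x0.device, dtype=torch.long)
    x = scheduler.add_noise(x0, noise, t)                 # forward diffusion to t_star
    for step in scheduler.timesteps:                      # 999, 998, ..., 0
        if int(step) > t_star:
            continue
        eps = unet(x, step).sample                        # predicted noise at this step
        x = scheduler.step(eps, step, x).prev_sample      # one reverse-diffusion step
    return x.clamp(-1, 1)
```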
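For high-perturbation-budget watermarks, the abstract describes a model-substitution adversarial attack. A common way to realize this idea is to train a local surrogate watermark classifier and craft PGD perturbations against it; the sketch below assumes such a surrogate (`surrogate`, outputting logits for [non-watermarked, watermarked]) and uses illustrative `eps`/`alpha`/`steps` values, since the paper's surrogate architecture and hyperparameters are not given here.

```python
# Sketch of a model-substitution (surrogate-based) watermark-removal attack.
import torch
import torch.nn.functional as F

def pgd_remove_watermark(x: torch.Tensor, surrogate: torch.nn.Module,
                         eps: float = 8 / 255, alpha: float = 1 / 255,
                         steps: int = 50) -> torch.Tensor:
    """Perturb watermarked images x (in [0, 1]) so the surrogate labels them
    non-watermarked, hoping the perturbation transfers to the real detector."""
    x_adv = x.clone().detach()
    target = torch.zeros(x.shape[0], dtype=torch.long, device=x.device)  # class 0 = non-watermarked
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(surrogate(x_adv), target)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv - alpha * grad.sign()              # step toward "non-watermarked"
            x_adv = x + (x_adv - x).clamp(-eps, eps)         # project into the L_inf ball
            x_adv = x_adv.clamp(0, 1)
    return x_adv.detach()
```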
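The black-box spoofing attack in the abstract adds a watermarked noise image to real images so they are flagged as watermarked. The sketch below is one plausible instantiation under stated assumptions: `watermark_service` is a hypothetical black-box API that returns a watermarked version of any query image, and `num_queries`/`strength` are illustrative hyperparameters.

```python
# Sketch of a black-box spoofing attack: estimate the watermark signal from
# watermarked noise images, then blend it into clean real images.
import torch

def spoof(real_images: torch.Tensor, watermark_service,
          num_queries: int = 64, strength: float = 1.0) -> torch.Tensor:
    """real_images: tensor in [0, 1], shape (B, 3, H, W)."""
    _, c, h, w = real_images.shape
    signal = torch.zeros(1, c, h, w)
    for _ in range(num_queries):
        noise = torch.rand(1, c, h, w)              # featureless query image
        wm_noise = watermark_service(noise)         # black-box: watermarked version of the query
        signal += wm_noise - noise                  # accumulate the embedded perturbation
    signal /= num_queries                           # average out content, keep the watermark pattern
    return (real_images + strength * signal).clamp(0, 1)  # real images now read as watermarked
```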