Detection, Attribution and Localization of GAN Generated Images
Saved in:
Format: | Article |
---|---|
Language: | English |
Online access: | Order full text |
Abstract: | Recent advances in Generative Adversarial Networks (GANs) have led to the
creation of realistic-looking digital images that are difficult for humans or
computers to detect. GANs are used in a wide range of tasks, from modifying
small attributes of an image (StarGAN [14]) and transferring attributes
between image pairs (CycleGAN [91]) to generating entirely new images
(ProGAN [36], StyleGAN [37], SPADE/GauGAN [64]). In this paper, we propose a
novel approach to detect, attribute and localize GAN generated images that
combines image features with deep learning methods. For every image,
co-occurrence matrices are computed on neighboring pixels of the RGB channels
in different directions (horizontal, vertical and diagonal). A deep learning
network is then trained on these features to detect, attribute and localize
these GAN generated/manipulated images. A large-scale evaluation of our
approach on 5 GAN datasets comprising over 2.76 million images (ProGAN,
StarGAN, CycleGAN, StyleGAN and SPADE/GauGAN) shows promising results in
detecting GAN generated images. |
DOI: | 10.48550/arxiv.2007.10466 |
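The co-occurrence feature described in the abstract can be sketched as follows. This is a minimal illustration based only on the abstract, not the paper's implementation: the offset choices, the 256 quantization levels, and the way the nine matrices are stacked are assumptions.

```python
import numpy as np

def cooccurrence(channel: np.ndarray, dy: int, dx: int, levels: int = 256) -> np.ndarray:
    """Co-occurrence matrix for pixel pairs separated by the (dy, dx) offset.

    Entry [p, q] counts how often a pixel of intensity p has a neighbor
    of intensity q at the given offset.
    """
    h, w = channel.shape
    a = channel[:h - dy, :w - dx]   # reference pixels
    b = channel[dy:, dx:]           # their offset neighbors (same shape as a)
    mat = np.zeros((levels, levels), dtype=np.int64)
    np.add.at(mat, (a.ravel(), b.ravel()), 1)  # accumulate pair counts
    return mat

def gan_features(image: np.ndarray, levels: int = 256) -> np.ndarray:
    """Stack co-occurrence matrices for each RGB channel in the horizontal
    (0,1), vertical (1,0) and diagonal (1,1) directions.

    `image` is an (H, W, 3) uint8 array; the result has shape (9, levels, levels).
    """
    offsets = [(0, 1), (1, 0), (1, 1)]
    mats = [cooccurrence(image[:, :, c], dy, dx, levels)
            for c in range(3) for (dy, dx) in offsets]
    return np.stack(mats)
```

In the paper these 2D feature maps are the input to a deep network; any CNN that accepts multi-channel `levels x levels` inputs would fit this sketch.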