Robust Attentive Deep Neural Network for Detecting GAN-Generated Faces
Published in: IEEE Access, 2022, Vol. 10, p. 32574-32583
Main authors:
Format: Article
Language: English
Subjects:
Online access: Full text
Abstract: Generative Adversarial Network (GAN) based techniques can generate and synthesize realistic faces that cause profound social concerns and security problems. Existing methods for detecting GAN-generated faces can perform well on limited public datasets. However, images from existing datasets do not represent real-world scenarios well enough in terms of view variations and data distributions, where real faces largely outnumber synthetic ones. State-of-the-art methods do not generalize well to real-world problems and lack interpretability of their detection results. The performance of existing GAN-face detection models degrades further under data imbalance. To address these shortcomings, we propose a robust, attentive, end-to-end framework that spots GAN-generated faces by analyzing eye inconsistencies. Our model automatically learns to identify inconsistent eye components by localizing and comparing artifacts between the two eyes. After the iris regions are extracted by Mask-RCNN, we design a Residual Attention Network (RAN) to examine the consistency between the corneal specular highlights of the two eyes. Our method can effectively learn from imbalanced data using a joint loss function that combines the traditional cross-entropy loss with a relaxation of the ROC-AUC loss based on the Wilcoxon-Mann-Whitney (WMW) statistic. Comprehensive evaluations on a newly created FFHQ-GAN dataset in both balanced and imbalanced scenarios demonstrate the superiority of our method.
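The joint loss mentioned in the abstract pairs cross-entropy with a differentiable WMW-based relaxation of the AUC, which penalizes positive-negative score pairs that are not separated by a margin. A minimal sketch of that relaxation, following the standard WMW formulation; the function names, margin `gamma`, exponent `p`, and combination weight `lam` are illustrative assumptions, not values taken from the paper:

```python
import numpy as np

def wmw_auc_loss(pos_scores, neg_scores, gamma=0.3, p=2):
    """Differentiable WMW relaxation of (1 - AUC) over all pos/neg score pairs."""
    # Pairwise differences between every positive and every negative score.
    diff = pos_scores[:, None] - neg_scores[None, :]
    # Penalize pairs where the positive score does not exceed the
    # negative score by at least the margin gamma.
    margin = gamma - diff
    return float(np.sum(np.where(margin > 0, margin ** p, 0.0)))

def joint_loss(ce_loss, pos_scores, neg_scores, lam=0.5):
    # Hypothetical weighting `lam`; the abstract does not specify how the
    # two terms are balanced.
    return ce_loss + lam * wmw_auc_loss(pos_scores, neg_scores)
```

Because the penalty is summed over pairs rather than computed per sample, the term directly optimizes ranking quality, which is what makes it robust when real faces heavily outnumber synthetic ones.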
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2022.3157297